MS-SQL Post-exploitation In-depth Workshop @ ToorCon 2014!

Come join Noelle Murata and me (Rob Beck) for a hands-on workshop at ToorCon 2014 in San Diego this October. It’s been a while in the making, but we’re looking forward to delivering two days of Microsoft SQL Server shenanigans, code samples, workshops, and general database nerdery in the MS-SQL environment. Registration is open, and the workshop is scheduled for October 22nd and 23rd at the San Diego Westin Emerald Plaza!

Workshop Overview:

The MS-SQL Post-exploitation In-depth workshop demonstrates the tactics an attacker can employ to maintain persistence in a Microsoft SQL Server database while harnessing the available facilities to expand their influence in an environment. Plenty of resources exist today that show methods for compromising SQL Server and SQL-dependent applications to gain access to an environment; very few provide methods for maintaining control of a SQL instance or performing attacks against the host and environment from within the SQL service.

This course will offer attendees an understanding of the various facilities that are available for executing system-level commands, scripting, and compiling code… all from inside the SQL environment, once privileged access has been acquired (a brief sketch of one such facility appears after the list below). Students will walk away from this two-day course with a greater understanding of:

  • MS-SQL specific functionality
  • Stored procedures
  • Extended stored procedures
  • SQL assemblies
  • SQL agent
  • SQL internals
  • Conducting attacks and assessments from inside the SQL environment
  • Methods employed for stealth inside of SQL
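
To ground that, here is a minimal sketch, in Python with pyodbc, of the best-known such facility: the xp_cmdshell extended stored procedure. The server address and credentials are placeholders, and the sketch assumes a reachable SQL Server instance plus a login with sysadmin rights; it is an illustration of the technique, not workshop material.

```python
# Minimal sketch: OS command execution from inside SQL Server via the
# xp_cmdshell extended stored procedure. Assumes pyodbc, a reachable
# instance, and a sysadmin login; server and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=192.0.2.10;UID=sa;PWD=placeholder",
    autocommit=True,
)
cursor = conn.cursor()

# xp_cmdshell ships disabled; a sysadmin can re-enable it with sp_configure.
cursor.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
cursor.execute("EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;")

# The command runs in the security context of the SQL Server service account.
for row in cursor.execute("EXEC master..xp_cmdshell 'whoami'"):
    print(row[0])
```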

Upon the completion of this workshop, attendees will:

  • Be familiar with multiple facilities in the SQL Server environment for executing system commands.
  • Understand ways to execute arbitrary code and scripts from within the database.
  • Understand methods for operating with stealth in the SQL service.
  • Know ways an attacker can rootkit or backdoor the SQL service for persistence.
  • Be familiar with hooking internal SQL functions for data manipulation.
  • Be able to harvest credentials and password hashes from the SQL server.
  • Have familiarity with the extended stored procedure API.
  • Be able to create and deploy SQL assemblies.
  • Have the ability to impersonate system and domain-level users to perform escalation in the environment.

Attendee requirements for this workshop:

  • Modern laptop with wired or wireless networking capabilities.
  • Ability to use Microsoft Remote Desktop from their system.
  • Basic understanding of the T-SQL language and syntax.
  • Ability to follow along with coding/scripting concepts (coding experience a plus, but not required – languages include C, C++, C#, VBScript, JScript, and PowerShell).
  • Ability to navigate Visual Studio and OllyDbg (previous experience a plus, but not required).

Attendees will be provided with:

  • Hosted VMs for testing and workshop labs.
  • Training materials – presentation materials and lab examples.

Who should attend this workshop?

  • SQL administrators and security personnel.
  • Professional pen-testers and corporate security team members.
  • Incident response analysts seeking new methods of attack detection.
  • Forensic team members unfamiliar with SQL-related attack patterns.
  • Anyone interested in furthering their understanding of SQL Server.

A Tale of Two Professionals

Phones Aren’t the Only Things That Can Get Burned

On a recent engagement I was tasked with reviewing a mobile application that provides users with disposable phone numbers. The application I was testing provided phone numbers to mobile users, permitting them to make VoIP phone calls as well as receive SMS and picture messages; I will omit the actual application name, since it has no impact on the information being disclosed and I’m not out to shame a specific developer. As part of the service offering, you could acquire phone numbers in various countries, as well as various regions and cities in those countries, allowing for “local” phone calls and reducing costs for inbound and outbound regional calls.

During the initial setup of the application, new users are provided a free phone number to use during a three-day trial period. The trial period provides all features of the service, including making and receiving phone calls, sending and receiving SMS messages, sending and receiving picture messages, caller identification (“Caller ID”), and voicemail. At the conclusion of the three-day trial period, users are asked to renew the phone number or acquire a different one, using extremely short-term or long-term subscriptions. One of the selling points of the service is permitting users to create “burner”, or disposable, phone numbers that they can use for a specific period of time and for specific purposes.

Another feature of the service is the ability to have multiple phone numbers on a single mobile device, letting users keep numbers for specific purposes in a variety of regions around the world. As part of my testing, I navigated the various menus, going through the process of acquiring a phone number in another country. The most important thing to note here was that after selecting my country and region/city of choice, I was presented with a list of possible phone numbers I could acquire for that area. Not only did this allow me to enumerate possible phone numbers for the service, at least the ones not currently in use, it indicated that the service had a finite number of available phone numbers in any area; this “feature” might be indication enough that privacy and anonymity weren’t at the forefront of the developer’s mind. It was only after I had selected a number that I was prompted with the various pricing models available to procure the phone number for personal use.

Because of the various pricing options, as well as the trial period, I opted to put the project on hold and move on to another application so that I could allow the trial period to expire. This would let me determine the pricing models following the trial period, as well as anything else the application might want to charge me for. I put the application in the background and went about conducting my testing on additional applications.

Fast-forward 48 hours. I decided to check up on the previous VoIP application to determine if there were any additional notifications for payment or warnings of trial expiration, and to begin wrapping up my testing for documentation. I was surprised to see that the application had logged in excess of 20 missed calls and had a backlog of SMS messages from 10 or more random people. If you’ll recall, I established that this service had a finite number of available phone numbers and that I was provided a free phone number to test during the trial period; this means that the number I was provided had previously been used by another user of the system.

Based solely on the contents of the SMS messages received, as well as some of the voicemails left on my trial number’s messaging service, the previous owner was also a specialized professional who was used to charging an hourly rate; let’s just say that her chosen profession was of a much more discreet and intimate nature. I was presented with text message upon text message asking if she was available, what her hourly rate was, as well as a few much more graphic descriptions of specific requests the potential clients would like performed. What was more surprising, and traumatizing, was that some of these individuals had chosen to send naughty-gram picture messages of their previous work with this professional, personal pictures in admiration of this person, and… well, you have an imagination.

None of the individuals contacting this number had any indication that the person they were trying to contact (no pun intended) had been using a disposable phone number. The problem was made worse for them because of the features provided by this service; as previously mentioned, the VoIP service offers Caller ID. I was not only receiving the correspondence from this lengthy list of previous contacts, but now I had the phone numbers they were using to reach me.

[Image: A sample of the least explicit messages received.]

This situation not only posed a risk to the previous owner of this phone number, giving me access to the contacts who had reached out to her, but also exposed her clients and potential clients to an unknown individual now in possession of their information. While it would be nice to assume that the individuals attempting to correspond with the previous owner of the number were also using temporary phone numbers, this isn’t a perfect world, and people rarely take the steps needed to ensure their privacy if they don’t feel they’re at risk; after all, some amount of this sort of business is based on a level of trust and unwritten understanding between the professionals and their clients.

I’m not here to provide commentary on the nature of the previous individual’s chosen profession or hobby, to each their own, but this situation presented a stark introduction to some of the dangers of the burner-phone culture some of us have come to accept. While many of us can see the value of having a disposable phone number and messaging, easily hopping between numbers for both legitimate and illegitimate purposes, I don’t think many people have realized the repercussions of being the recipient of a disposable resource. Even in the age of services such as Google Voice, assumptions are made that the numbers we’re corresponding with have a reasonable time-to-live with the person who provided them to us.

With a minimal amount of social engineering, much more information could have been captured from these individuals. Because their phone numbers were disclosed, and given the power of Google and other search engines, the potential for extortion by a random individual now in possession of compromising photos is also a reality. The next time we make a phone call or send an SMS with questionable content, we have to ask ourselves: do we really know who is receiving this, or have we also been burned?

Who owns and regulates MY Facebook data?

My previous post briefly described the data that makes up a user’s Facebook profile, and this post will try to shed light on who owns and regulates that data.

I am probably not going out on a limb here to say that the majority of Facebook’s registered users have not read the privacy statement. I was like the majority of users myself, in that I did not fully read Facebook’s privacy statement upon signing up for the service. Facebook created a social media network online at a time when few requirements had been defined for this type of business in America or anywhere else. A lack of rules, combined with users constantly uploading more data, has allowed Facebook to maximize the use of your data and create a behemoth of a social media networking business.

Over time, Facebook has added features to allow users to self-regulate their data by limiting others (whether Facebook users or the general Internet public) from viewing certain data that one might want to share with only family or specific friends. This provided users with a sense of ownership and privacy, as the creator of the data could block or restrict friends and search providers from viewing it. Zuckerberg is even quoted by the WSJ as saying, “The power here is that people have information they don’t want to share with everyone. If you give people very tight control over what information they are sharing or who they are sharing with they will actually share more. One example is that one third of our users share their cell phone number on the site”.

In addition to privacy controls, Facebook gave users more insight into their data through a feature that allowed a user to download ‘all’ their data through a button in the account settings. I placed ‘all’ in quotes because, while you could download your Facebook profile data, this did not include wall comments, links, information tagged by other Facebook users, or any other data that you created during your Facebook experience. Combined, privacy controls and data export have been the main forms of control that Facebook has given its users over profile, picture, note, link, tag, and comment data since Facebook went live in 2004.

So now you might be thinking the problem is solved; restricting your privacy settings on the viewing of information and downloading ‘all’ your information fixes everything for you. Well, I wish that were the case with Facebook’s business operations. An open letter from ten security professionals to the US Congress highlighted that this was not simply the way things worked with Facebook and third-party Facebook developers’ operations. Facebook has reserved the right to change its privacy statement at any time with no notice to the user, and it has done so a few times, to an uproar from its user base. As Facebook has grown in popularity and company footprint, security professionals along with media outlets have started publishing security studies painting Facebook in a darker light.

As highlighted by the US Federal Trade Commission in late 2011, Facebook was not respecting users’ privacy when sharing information with advertisers or when automatically enabling contradictory privacy settings on new services. Facebook settled with the FTC on seven charges of deceiving users by telling them they could keep their data private. From my perspective, it appears that Facebook is willing to contradict its users’ privacy when it suits the best interests of shareholders and business revenue.

In another privacy mishap, Facebook was found by an Austrian student to be storing user details even after a user deactivates the service. This sparked the “Europe versus Facebook” initiative, which put heat on Facebook to give more details on how long data was being retained for current and deactivated users. Holding on to user data is lucrative for Facebook, as it allows the company to claim more users when selling to advertising subscribers, as well as to promote a larger total user base for private investor bottom lines.

So the next question one might ask is “who regulates my data held by social media companies?” Summed up quickly: today, no one outside Facebook is regulating your data, and little insight is given to users on this process. The governments of the US, along with the European Union, are looking at means of regulating Facebook’s operations through instruments such as data privacy regulations and the US-EU Safe Harbor framework. With Facebook announcing an initial public offering of five billion USD, more regulation, at least financially, is likely to hit Facebook in the future.

As an outcome of the 2011 investigation, Facebook has agreed to independent audits by third parties, presumably of its choosing. I have not been able to identify details regarding the subject of these audits or the ramifications of findings from an audit. Facebook has also updated its public statement and communication to developers, and now states that deactivated users will have their accounts deleted after 30 days. I have yet to see a change in Facebook’s operations with respect to users’ privacy settings where third parties and other outside entities are concerned – in fairness, Facebook insists data is not directly shared for advertising, although some British folks may disagree with Facebook’s claims of advertising privacy.

From an information security perspective, my ‘free’ advice to businesses, developers, and end users is this: do not access or give more data than is necessary for the user experience, as doing so only brings trouble in the long run. While I would like to give Facebook the benefit of the doubt in its operations, I personally only give data that I am comfortable sharing with the world, even when it is limited to friends. In global business, data privacy regulations vary significantly between countries; with regulations come requirements, and everyone knows that failing requirements leads to fines, so businesses need to think about accessing only appropriate information and restricting access accordingly. For the end user, or Facebook’s product, remember that Facebook can change its privacy statement at its leisure, and Facebook is ultimately a business with stakeholders eager to see quarter-after-quarter growth.

I hope this post has been insightful to you; please check back soon for my future post on how your Facebook data is being used and the different entities that want to access your data.

What Makes Up Facebook Data?

This is the first post in our Social Networking series.

My guess is that you would not simply give a person who knocked on your front door or approached you in the street most of the data Facebook collects in your profile. Facebook profile data consists of many things, including your birth date, email, physical address, current location, work history, education history, and additional information you input for activities, interests, and music (interestingly, much of this can be used for identity theft…). In addition to your profile data, any installed or authenticated Facebook applications have access to your wall posts and list of friends, as well as any other data that is shared with “Everyone”.

As Facebook adds new features, the data included in your Facebook profile has probably crept to include uploaded pictures, application usage and history, and tags in posts or pictures. Facebook will always be looking for ways to collect more of your data, as YOU are their product. Your data, the data of friends, and the data of everyone else on Facebook are where Facebook collects its profit and, as with most businesses, profits need to increase through expanding markets and giving access to their product.

The data collected by Facebook on you can also include cookie tracking, even when you are not explicitly on their website. Facebook heard much uproar from the user community when a security researcher discovered in September 2011 that Facebook was tracking even users who had gone as far as deactivating their accounts! Facebook could then track all of a user’s web history, even through websites that are not related to Facebook activities in any way.

You do have the ability to limit data on Facebook and make sound decisions about what personal data you decide to submit to it (friends are another matter). Inherently, by using Facebook’s ‘free’ services, you are going to lose some control of the information you share with friends. There are a few important factors that you should think about in dealing with social media, and my next post will shine some light on who actually owns and regulates your data within Facebook; stay tuned, and feedback is always welcome.

Facebook Applications Have Nagging Vulnerabilities

By Neohapsis Researchers Andy Hoernecke and Scott Behrens

This is the second post in our Social Networking series. (Read the first one here.)

As Facebook’s application platform has become more popular, the composition of applications has evolved. While early applications seemed to focus on either social gaming or extending the capabilities of Facebook, now Facebook is being utilized as a platform by major companies to foster interaction with their customers in a variety of forms, such as sweepstakes, promotions, shopping, and more.

And why not? We’ve all heard the numbers: Facebook has 800 million active users, 50% of whom log on every day. On average, more than 20 million Facebook applications are installed by users every day, while more than 7 million applications and websites remain integrated with Facebook. (1) Additionally, Facebook is seen as a treasure trove of valuable data accessible to anyone who can get enough “Likes” on their page or application.

As corporate investments in social applications have grown, Neohapsis Labs researchers have been asked to help clients assess these applications and determine what type of risk exposure their release may pose. We took a sample of the applications we have assessed and pulled together some interesting trends. For context, most of these applications are very small in size (2-4 dynamic pages). The functionality contained in these applications ranged from simple sweepstakes entry forms and contests with content submission (photos, essays, videos, etc.) to gaming and shopping applications.

From our sample, we found that on average the applications assessed had vulnerabilities in 2.5 vulnerability classes (e.g., Cross-Site Scripting or SQL Injection), and none of the applications were completely free of vulnerabilities. Given that the attack surface of these applications is so small, this is a somewhat surprising statistic.

The most commonly identified findings in our sample group of applications included Cross-Site Scripting, Insufficient Transport Layer Protection, and Insecure File Upload vulnerabilities. Each of these vulnerability classes is discussed below, along with how the social networking aspect of the applications affects their potential impact.

Facebook applications suffer the most from Cross-Site Scripting. This type of vulnerability was identified in 46% of the applications sampled. This is not surprising, since this age-old problem still creeps into many corporate and personal applications today. An application vulnerable to XSS could be used to attempt browser-based exploits or to steal session cookies (but only in the context of the application’s domain).

These types of applications are generally framed inline [inline framing, or iframing, is a common HTML technique for framing media content] on a Facebook page from the developer’s own servers/domain. This alleviates some of the risk to the user’s Facebook account, since the JavaScript can’t access Facebook’s session cookies. And even if it could, Facebook uses HttpOnly flags to prevent JavaScript from accessing session cookie values. But we have found that companies tend to reuse the same domain name for these applications, since the real URL is generally never visible to the end user. This means that if one application has an XSS vulnerability, it could present a risk to any other applications hosted at the same domain.
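
As a quick illustration of that flag, here is a minimal Flask sketch (the application, route, and cookie names are hypothetical, not from any application we assessed) setting a session cookie that injected JavaScript cannot read:

```python
# Minimal sketch of the HttpOnly (and Secure) cookie flags; the app, route,
# and cookie names are hypothetical examples, not an assessed application.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login")
def login():
    resp = make_response("logged in")
    # httponly=True: document.cookie (and thus XSS payloads) cannot read it.
    # secure=True: the browser only sends it over HTTPS, never cleartext HTTP.
    resp.set_cookie("session_id", "opaque-token", httponly=True, secure=True)
    return resp

if __name__ == "__main__":
    app.run()
```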

When third-party developers enter the picture all this becomes even more of a concern, since two clients’ applications may be sharing the same domain and thus be in some ways reliant on the security of the other client’s application.

The second most commonly identified vulnerability, affecting 37% of the sample, was Insufficient Transport Layer Protection. While it is a common myth that conducting a man-in-the-middle attack against cleartext protocols is impossibly difficult, the truth is that it’s relatively simple. Tools such as Firesheep aid in this process, allowing an attacker to create custom JavaScript handlers to capture and replay the right session cookies. About an hour after downloading Firesheep and looking at examples, we wrote a custom handler for an application under assessment that only used SSL when submitting login information. On an unprotected WiFi network, as soon as the application sent any information over HTTP we had valid session cookies, which were easily replayed to compromise that victim’s session.
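
To make the replay step concrete, here is a minimal sketch (the host name and token value are hypothetical stand-ins) of what an attacker does with a sniffed cookie; no password cracking or exploit is involved, only reuse of the captured value:

```python
# Minimal sketch of session-cookie replay; the host and token below are
# hypothetical stand-ins for values sniffed off an open WiFi network.
import requests

# Cookie value captured from a cleartext HTTP request (e.g., with Firesheep).
stolen_cookies = {"session_id": "token-captured-over-http"}

# Presenting the cookie is all it takes to assume the victim's session.
resp = requests.get("https://app.example.com/account", cookies=stolen_cookies)
print(resp.status_code)
```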

Once again, the impact of this finding really depends on the functionality of the application, but the wide variety of applications on Facebook does provide an interesting and varied landscape for the attacker to choose from. We only flagged this vulnerability under specific circumstances where either the application cookies were somehow important (for example, being used to identify a logged-in session) or the application included functionality where sensitive data (such as PII or credit card data) was transmitted.

The third most commonly identified finding was Insecure File Upload. To us, this was surprising, since it’s generally not considered one of the most commonly identified vulnerabilities across all web applications. Nevertheless, 27% of our sample included this type of vulnerability. We attribute its identification rate to the prevalence of social applications that include some type of file upload functionality (to share an avatar, photo, document, movie, etc.).

We found that many of the applications we assessed implemented their file upload functionality in an insecure way. Most of the applications did not check content-type headers or even file extensions. Although none of the vulnerabilities discovered led to command injection flaws, almost every vulnerability exploited allowed the attacker to upload JavaScript, HTML, or other potentially malicious files such as PDFs and executables. Depending on the domain name affected by this vulnerability, the flaw could aid an attacker’s social engineering effort, as the attacker now has malicious files on a trusted domain.
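
By way of contrast, here is a minimal sketch (the endpoint, form field, and upload directory are hypothetical) of the kind of checks that were missing: an extension allow-list plus a server-chosen filename, rather than trusting client-supplied names or content-type headers:

```python
# Minimal sketch of defensive upload handling; the endpoint, form field,
# and upload directory are hypothetical. Extensions are allow-listed and
# the stored filename is chosen by the server, never by the client.
import os
import uuid
from flask import Flask, abort, request

app = Flask(__name__)
ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif"}
UPLOAD_DIR = "/var/uploads"  # should be served with script execution disabled

@app.route("/upload", methods=["POST"])
def upload():
    f = request.files.get("avatar")
    if f is None or not f.filename:
        abort(400)
    ext = os.path.splitext(f.filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        abort(400)  # rejects .html, .js, .pdf, .exe, and friends
    # Discard the client-supplied name entirely; pick our own.
    f.save(os.path.join(UPLOAD_DIR, uuid.uuid4().hex + ext))
    return "ok"

if __name__ == "__main__":
    app.run()
```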

Our assessments also identified a wide range of other vulnerability types. For example, we found several of these applications utilizing publicly available admin interfaces with guessable credentials. Furthermore, at least one of the admin interfaces was riddled with stored XSS vulnerabilities. Server configurations were also a frequent problem, with unnecessarily exposed services and insecure configurations being repeatedly identified.

Finally, we also found that many of these web applications had some interesting issues that are generally unlikely to affect a standard web application. For example, social applications with a contest component may need to worry about the integrity of the contest. If it is possible for a malicious user to game the contest (for example by cheating at a social game and placing a fake high score) this could reflect badly on the application, the contest, and the sponsoring brand.

Even though development of applications integrated with Facebook and other social networking sites is increasing, we’ve found companies still tend to handle these outside of their normal security processes. It is important to realize that these applications can present a risk and should be examined just as thoroughly as traditional stand-alone web applications.

(Red)Herring v. United States Revisited

By N. Puffer

It’s just about two years since the Supreme Court decided Herring, so I figured I’d take a look back and see what, if any, impact it’s had. At the time of the finding, several people in the infosec community were worried by Justice Ginsburg’s dissent, which pointed out the dangers of removing any penalty for a lack of integrity in systems used by law enforcement.

To recap the story so far …

In 1981 a few cops in California were tracking down some drug dealers. While watching the people come and go they identified a couple of other people, got the appropriate warrants, and arrested them. The problem was that the courts pencil-whipped the warrant; the police didn’t actually meet the needed rigor for a search. A lot of lawyers did a lot of talking, and we ended up with United States v. Leon (1984) saying that if the police were acting in good faith (in this case they were), then the exclusionary rule doesn’t apply. Sounds scary, and Orwell would love the coincidence in dates. What the courts really seem to be saying is that the justice system is run by people, and it has long acknowledged that people are fallible (hence the appeals process). If good work comes from an honest mistake, society shouldn’t be punished by letting a drug dealer go free.

Of course, detractors may say that ignorance of the law is no excuse for citizens (“I didn’t know the speed limit, officer”), and that members of the justice system should be held to a higher standard when it comes to a person’s freedom. Fair points. Feel free to discuss among yourselves.

Fast forward to 2004, the age of networked policing, and Alabama. Mr. Herring goes to the Coffee County police station to pick up an impounded vehicle. As part of a routine warrant check, neighboring Dale County tells a Coffee County investigator that there’s an outstanding warrant for Mr. Herring. The vehicle is searched, weapons and meth are found, and hilarity does not ensue. The issue? It turns out Dale County had made a mistake in their data entry. The warrant had been recalled months prior to the incident, but the system of reference (the database) was incorrect. Part of the process of warrant notification includes pulling the actual paper warrant (the system of record) and faxing it. When it was discovered that the paper warrant didn’t exist, a check was performed and Coffee County was notified. Elapsed time to correct the mistake: 15 minutes. Time served by Mr. Herring: 27 months.

Five years of legal workings later, a 5-4 ruling of the Supreme Court upheld Mr. Herring’s conviction. The central supporting opinion seemed to reach back to Leon. However, the dissenting opinion submitted by Justice Ginsburg touched on what caused the watchdogs to perk up. As mentioned above, from an information management point of view, the police actions by Dale County were fundamentally flawed. Specifically, a system of reference was used to trigger a critical action, even though, procedurally, a system of record needed to be consulted. Given that the mistake only caused a 15-minute delay, it’s reasonable to assume that there would have been no tangible impact to law enforcement if both systems were checked, but that wasn’t really the point.

The issue, as decided, seemed to be: do errors in information systems (court records) extend good-faith exceptions to the exclusionary rule? From section ‘A’ of Justice Ginsburg’s dissent: “Is it not altogether obvious that the Department could take further precautions to ensure the integrity of its database? The Sheriff’s Department is in a position to remedy the situation and might well do so if the exclusionary rule is there to remove the incentive to do otherwise.”

So, everyone agrees that there was a mistake in the Dale County records. It’s also agreed that the mistakes were negligent (Justice Roberts’ opinion), and the courts ruled that even though the system was flawed, it wasn’t enough to exclude the results of the system. Furthermore, according to Justice Ginsburg, there’s no incentive to fix the problem. And now we’re back to the beginning; police don’t need to ensure that systems are accurate, much less secure, as long as they aren’t complicit. There’s no motivation to ensure system integrity. In fact, there’s a motivation to not know how bad the systems are; if you knew, you might have to fix them.

Yet in the past two years there’s no evidence that law enforcement is purposely letting their systems atrophy to game the courts. A search of citations brings up McDonald v. City of Chicago, which has a tangential citation of Herring in a right-to-bear-arms case. US v. Farias-Gonzales also comes up. This is a case concerning unreasonable search and seizure, but the most technological part of the case is that a portable fingerprint scanner was used.

People v. Branner also comes up, and yet again the issue is with people, not systems; in this case, cops working from an outdated knowledge of judicial findings. And you can keep searching: Montejo v. Louisiana, People v. Lopez; all dealing with people or straightforward citations.

People v. Washebek, filed in November of 2010, comes closer. Here the prosecution successfully argued that Herring rejected the distinction between law-enforcement error and errors in court records. The facts of the case concerned a search based on incorrect probation status, a mistake in probationary record-keeping.

So in two years, that’s a single similar filing, and nothing about widespread flaws in a law enforcement system. There are, of course, other writings about this ruling. Some feel this is just the inevitable march of the court towards killing the exclusionary rule altogether. Others feel these are the necessary and correct interpretations of the Constitution, meant to keep us secure through the actions of police. In either case it seems clear that there wasn’t an overarching trend towards a purposeful degradation of integrity or promotion of ignorance with regard to the security of critical law enforcement systems.

But why not? Perhaps it’s because there’s another motivator involved. The police don’t just interact with the courts as consumers of data to enforce laws; they also place information into those same systems. Lack of integrity works both ways in most cases, and it is naturally in the best interest of the police to have a system that accurately represents the real world. I can’t imagine a cop would be happy if the paperwork they filed to finish off some good police work vanished, or appeared to have vanished to the prosecution.

And as far as security? Well, the same forces likely apply. While it may benefit police to occasionally get a pass based on errors, this doesn’t seem to outweigh the risks of having a system that can be manipulated. So in the end, while Justice Ginsburg makes an insightful point, there may be additional sides to the story that the courts were not asked to consider.

[postscript]

During peer review I was asked, “so what?” for the rest of the world that’s not in law enforcement. Fair point. On one hand this was an interesting case in the context of digital forensics and the legal system. The author also likes to consider issues that impact overarching trends in information security, especially when they impact our freedom. However, that is admittedly self-indulgent, and this isn’t a legal blog. If you wanted to abstract a theme to corporate motivations, I’d ask, “Are you considering the value of information relative to ensuring its integrity?” More specifically, in what situation would you reasonably stand to benefit from the lack of integrity of your information systems? If there’s interest in expanding here, leave a note in the comments and we can follow up…