Cached Domain Credentials in Vista/7 (aka why full drive encryption is important)

Recently, I was conducting a security policy audit of a mid-size tech company and asked whether they used any form of disk encryption on their employees’ workstations. They did not; instead, they pointed me to a policy document that required all “sensitive” files to be stored in an encrypted folder on the user’s desktop. They assumed this was adequate protection against the files being recovered should a laptop be lost or stolen.

Unfortunately, this is not the case. Without full disk encryption (like BitLocker), sensitive system files are always available to an attacker, and credentials can be compromised. Since Windows file encryption is keyed to user credentials (local or Active Directory), once those credentials are compromised, an attacker has full access to every “encrypted” file on the system. I will outline an attack scenario below to stress the importance of full drive encryption.

 

BACKGROUND

If you are not familiar with it, Windows has a built-in file encryption feature called Encrypting File System (EFS) that has been around since Windows 2000. If you right-click on a file or folder and go to Properties->Advanced, you can check a box called “Encrypt contents to secure data”. When this box is checked, Windows encrypts the folder and its contents using EFS, and the folder or file appears green in Explorer to indicate that it is protected:

[Image: an EFS-encrypted directory shown in green in Explorer]
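
Incidentally, that checkbox is just a front end for a single Win32 API call. As a minimal sketch (my own illustration; it assumes a Windows machine with Python installed, and the file path is hypothetical), the same encryption can be triggered programmatically:

# EFS-encrypt a file via advapi32's EncryptFile API, which is what the
# Explorer checkbox invokes under the hood. The path is hypothetical.
import ctypes

path = u"C:\\Users\\nharpsis\\Desktop\\secret.txt"
if ctypes.windll.advapi32.EncryptFileW(path):
    print "EFS-encrypted:", path
else:
    print "EncryptFile failed, error", ctypes.windll.kernel32.GetLastError()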

 

Now only that user can open the file; even Administrators are denied access. Here a Domain Admin (‘god’) is attempting to open the encrypted file that was created by a normal user (‘nharpsis’):

[Image: the Domain Admin ‘god’ being denied access to the encrypted secret.txt]


According to Microsoft’s TechNet article on EFS, “When files are encrypted, their data is protected even if an attacker has full access to the computer’s data storage.” Unfortunately, this is not quite true. The encrypted file above (“secret.txt”) is decrypted automatically and becomes viewable whenever ‘nharpsis’ logs in to the machine. Therefore, to view the files, an attacker only needs to compromise the ‘nharpsis’ account.

 

THE ATTACK

In this attack scenario, we will assume that a laptop has been lost or stolen and is powered off. There are plenty of ways to mount an online attack against Windows or extract credentials and secret keys straight from memory. Tools like mimikatz or the Volatility Framework excel at these attacks.

For a purely offline attack, we will boot from a live Kali Linux image and mount the Windows hard drive. As you can see, even though we have mounted the Windows partition and have read/write access to it, we are unable to view files encrypted with EFS:

[Image: “Permission denied” when reading the EFS-encrypted file from Kali]

Yes, you read that right: we are root, and we are still seeing “Permission denied”.

Commercial forensic tools like EnCase have functionality to decrypt EFS, but even they require the username and password of the user who encrypted it. So the first step will be to recover Ned Harpsis’s credentials.

 

Dumping Credentials

There are numerous ways to recover or bypass local accounts on a Windows machine. SAMDUMP2 and ‘chntpw’ are included with Kali Linux and do a nice job of dumping NTLM hashes and resetting account passwords, respectively. However, in this instance, and in the case of the company I was auditing, the machines are part of a domain and AD credentials are used to log in.

Windows caches domain credentials locally to facilitate logging in when the Domain Controller is unreachable. This is how you can log in to your company laptop when traveling or on a different network. If any domain user, including an admin, has ever logged in to the machine, that user’s username and a hash of their password will be stored in one of the registry hives.

Kali Linux includes the tool ‘cachedump’, which is intended for exactly this purpose. Cachedump is part of a larger suite of awesome Python tools called ‘creddump’ that is available in a public svn repo: https://code.google.com/p/creddump/

Unfortunately, creddump has not been updated in several years, and you will quickly realize when you try to run it that it does not work on Windows 7:

[Image: cachedump erroring out when run against a Windows 7 system]

This is a known issue and is discussed on the official Google Code project.

As a user pointed out, the issue carried over to the Volatility project, and an issue was raised there as well. A helpful user released a patch file that gets the cachedump program working with Windows 7 and Vista.

After applying the patches and fixes I found online, and making some minor adjustments for my own sanity, I got creddump working on my local Kali machine.

For convenience’s sake, I have forked the original Google Code project and applied the patches and adjustments. You can find the updated and working version of creddump on the Neohapsis Github:

https://github.com/Neohapsis/creddump7

 

Now that I had a working version of the program, it was just a matter of getting it onto my booted Kali instance and running it against the mounted Windows partition:

[Image: creddump in action against the mounted Windows partition]

Bingo! We have recovered two hashed passwords: one for ‘nharpsis’, the user who encrypted the initial file, and ‘god’, a Domain Admin who had previously logged in to the system.
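
For reference, the invocation is a one-liner. Assuming the Windows partition is mounted at /mnt/win (a hypothetical mount point), it looks something like this, with the trailing argument telling cachedump to parse the Vista/7 format:

./cachedump.py /mnt/win/Windows/System32/config/SYSTEM /mnt/win/Windows/System32/config/SECURITY true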

 

Cracking the Hashes

Unlike locally stored credentials, these are not NT hashes. Instead, they are in a format known as ‘Domain Cached Credentials 2’ or ‘mscash2’, which uses PBKDF2 to derive the hashes. Unfortunately, PBKDF2 is a computationally heavy function, which significantly slows down the cracking process.
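
To make that cost concrete, here is a sketch of how an mscash2 hash is derived, reconstructed from public documentation of the format rather than taken from creddump itself (it assumes a Python build whose hashlib exposes MD4 and pbkdf2_hmac, i.e. 2.7.8+; 10,240 is the default Windows iteration count):

# DCC2/mscash2: the MD4-based DCC1 value fed through PBKDF2-HMAC-SHA1,
# salted with the lowercased username.
import hashlib

def mscash2(username, password, iterations=10240):
    user = username.lower().encode("utf-16-le")
    ntlm = hashlib.new("md4", password.encode("utf-16-le")).digest()
    dcc1 = hashlib.new("md4", ntlm + user).digest()   # the old XP-era format
    return hashlib.pbkdf2_hmac("sha1", dcc1, user, iterations, 16).encode("hex")

print mscash2("nharpsis", "Welcome1!")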

Both John and oclHashcat support the ‘mscash2’ format. When using John, I recommend sticking to a relatively short wordlist rather than attempting a pure bruteforce.

If you want to try a large wordlist with some transformation rules, or run a pure bruteforce, use a GPU cracker with oclHashcat and still be prepared to wait a while.
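
For reference, the invocations look roughly like this (file names are placeholders, and the mscash2 format requires a jumbo build of John):

john --format=mscash2 --wordlist=words.txt cached_hashes.txt
oclHashcat -m 2100 cached_hashes.txt words.txt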

To prove that cracking works, I used a wordlist I knew contained the plaintext passwords. Here’s John cracking the domain hashes:

[Image: John cracking the domain hashes]

Note the format is “mscash2”. The Domain Admin’s password is “g0d”, and nharpsis’s password is “Welcome1!”

I also extracted the hashes and ran them on our powerful GPU cracking box here at Neohapsis. For oclHashcat, each line must be in the format ‘hash:username’, and the code for mscash2 is ‘-m 2100’:

[Image: the hashes cracked with oclHashcat]


Accessing the encrypted files

Now that we have the password for the user ‘nharpsis’, the simplest way to retrieve the encrypted file is just to boot the laptop back into Windows and log in as ‘nharpsis’. Once you are logged in, Windows kindly decrypts the files for you, and we can just open them up:

[Image: secret.txt opened after logging in as ‘nharpsis’]

 

Summary

As you can see, if an attacker has physical access to the hard drive, EFS is only as strong as the user’s login password. Since this is a purely offline attack, an attacker has unlimited time to crack the password and then access the sensitive information.

So what can you do? Enforce full drive encryption. When BitLocker is enabled, everything on the drive is encrypted, including the registry hives where the cached credentials live. Yes, there are attacks against BitLocker encryption, but they are much more difficult than attacking a user’s password.

In the end, I outlined the above attack scenario to my client and recommended they amend their policy to include mandatory full drive encryption. Hopefully this straightforward scenario shows that solely relying on EFS to protect sensitive files from unauthorized access in the event of a lost or stolen device is an inadequate control.


Smart TV + Smartphone = Shiny New Attack Surfaces

According to a Gartner report from December 2012, “85 percent of all flat-panel TVs will be Internet-connected Smart TVs by 2016.” Forbes magazine gives some analysis about what is fueling this trend: http://www.forbes.com/sites/michaelwolf/2013/02/25/3-reasons-87-million-smart-tvs-will-be-sold-in-2013/. The article makes mention of “DIAL”, an enabling technology for second-screen features (which this post is about). With these new devices come new risks, as evidenced in the following article: https://securityledger.com/2012/12/security-hole-in-samsung-smart-tvs-could-allow-remote-spying/, as well as more recent research about Smart TV risks presented at the CanSecWest and DefCon security conferences this year (2013).

For more details about exactly what features a Smart TV has above and beyond a normal television, consult this Wikipedia article: http://en.wikipedia.org/wiki/Smart_TV.

This post introduces and describes aspects of “DIAL”, a protocol developed by Google and Netflix for controlling Smart TVs with smartphones and tablets. DIAL provides “second screen” features, which allow users to watch videos and other content on a TV using a smartphone or tablet. This article will review sample code for network discovery and for enumerating Smart TV apps using this protocol.

Part 1: Discovery and Enumeration

Smart TVs are similar to other modern devices in that they have apps. Smart TVs normally ship with apps for YouTube(tm) and Netflix(tm), as well as many other built-in apps. If you have a smartphone, then maybe you’ve noticed that when your smartphone and TV are on the same network, a small square icon appears in some mobile apps, allowing you to play videos on the big TV and control the TV apps from your phone. In this setup, the TV is a “first screen” device, and the phone or tablet functions as a “second screen” controlling the first.

DIAL is the network protocol used for these features and is a standard developed jointly between Google and Netflix.  (See http://www.dial-multiscreen.org/ ).  DIAL stands for “Discovery and Launch”. This sounds vaguely similar to other network protocols, namely “RPC” (remote procedure call). Basically, DIAL gives devices a way to quickly locate specified networked devices (TVs) and controlling programs (apps) on those devices.

Let’s take a look at the YouTube mobile application to see how exactly this magic happens. Launching the YouTube mobile app with a Smart TV on the network (turned on, of course) shows the magic square indicating a DIAL-enabled screen is available:

[Image: the DIAL icon in the YouTube mobile app]

Square appears when YouTube app finds TVs on the network.

Clicking the square brings up a selection menu where the user may choose which screen should play YouTube videos. Recent versions of the YouTube apps allow “one touch pairing”, which makes all of the setup easy for the user:

[Image: the TV picker menu in the YouTube mobile app]

Let’s examine the traffic generated by the YouTube mobile app at launch.

  • The YouTube mobile app sends an initial SSDP request to discover available first-screen devices on the network.
  • The sent packet is destined for a multicast address (239.255.255.250) on UDP port 1900. Multicast is useful because devices on the local subnet can listen for it, even though it is not specifically sent to them.
  • The YouTube app multicast packet contains the string “urn:dial-multiscreen-org:service:dial:1”. A Smart TV will respond to this request, telling the YouTube mobile app its network address and information about how to access it.

A broadcast search request from the YouTube mobile app looks like this:

11:22:33.361831 IP my_phone.41742 > 239.255.255.250.1900: UDP, length 125

M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:1900
MAN: "ssdp:discover"
MX: 1
ST: urn:dial-multiscreen-org:service:dial:1

Of course, the YouTube app isn’t the only program that can discover ready-to-use Smart TVs. The following is a DIAL discoverer in a few lines of python. It waits 5 seconds for responses from listening TVs. (Note: the request sent in this script is minimal. The DIAL protocol specification has a full request packet example.)

#! /usr/bin/env python
# Minimal DIAL discoverer: send an SSDP M-SEARCH and print any replies.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(5.0)
# Minimal search target; see the DIAL spec for a full M-SEARCH request.
s.sendto("ST: urn:dial-multiscreen-org:service:dial:1",("239.255.255.250",1900))
while 1:
  try:
    data,addr = s.recvfrom(1024)
    print "[*] response from %s:%d" % addr
    print data
  except socket.timeout:
    break

A response from a listening Smart TV on the network looks like:

[*] response from 192.168.1.222:1900
HTTP/1.1 200 OK
LOCATION: http://192.168.1.222:44047/dd.xml
CACHE-CONTROL: max-age=1800
EXT:
BOOTID.UPNP.ORG: 1
SERVER: Linux/2.6 UPnP/1.0 quick_ssdp/1.0
ST: urn:dial-multiscreen-org:service:dial:1
USN: uuid:bcb36992-2281-12e4-8000-006b9e40ad7d::urn:dial-multiscreen-org:service:dial:1

Notice that the TV returns a LOCATION header with a URL: http://192.168.1.222:44047/dd.xml. Reading that URL leads to yet another URL which provides the “apps” link on the TV.

HTTP/1.1 200 OK
Content-Type: application/xml
Application-URL: http://192.168.1.222:60151/apps/
<?xml version="1.0"?><root xmlns="urn:schemas-upnp-org:device-1-0" xmlns:r="urn:restful-tv-org:schemas:upnp-dd"> <specVersion> <major>1</major> <minor>0</minor> </specVersion>
<device> <deviceType>urn:schemas-upnp-org:device:tvdevice:1</deviceType> <friendlyName>Vizio DTV</friendlyName> <manufacturer>Vizio Inc.</manufacturer> <modelName>Vizio_E420i_A0</modelName>
<UDN>uuid:bcb36992-2281-12e4-8000-006b9e40ad7d [...]

(The capture here also picked up a separate, unrelated M-SEARCH query for a UPnP MediaServer:)

M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:1900
MAN: "ssdp:discover"
MX: 3
ST: urn:schemas-upnp-org:device:MediaServer:1
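
Following the chain by hand is easy; a couple of lines of Python (reusing the IP and port from the responses above) fetch the device description and pull out the Application-URL header that points at the apps endpoint:

# Fetch the DIAL device description and print the apps endpoint.
import urllib2

resp = urllib2.urlopen("http://192.168.1.222:44047/dd.xml")
print "apps endpoint:", resp.headers.get("Application-URL")
print resp.read()   # the device description XML shown above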

At this point, the YouTube mobile app will try to access the “apps” URL combined with the application name, issuing a GET request to http://192.168.1.222:60151/apps/YouTube. A positive response indicates the application is available, and returns an XML document detailing some data about the application state and feature support:

HTTP/1.1 200 OK
Content-Type: application/xml

<?xml version="1.0" encoding="UTF-8"?>
<service xmlns="urn:dial-multiscreen-org:schemas:dial">
<name>YouTube</name>
<options allowStop="false"/>
<state>stopped</state>
</service>

Those of you who have been following along may have noticed how easy this has been. So far, we have sent one UDP packet and issued two GET requests. This has netted us:

  • The IP address of a Smart TV
  • The operating system of a Smart TV (Linux 2.6)
  • Two listening web services on random high ports.
  • A RESTful control interface to the TV’s YouTube application.

If only all networked applications/attack surfaces could be discovered this easily. What should we do next? Let’s make a scanner. After getting the current list of all registered application names (as of Sept 18, 2013) from the DIAL website, it is straightforward to create a quick and dirty scanner to find the apps on a Smart TV:

#! /usr/bin/env python
#
# Enumerate apps on a SmartTV
# <arhodes@neohapsis.com>
#
import urllib2
import sys
apps=['YouTube','Netflix','iPlayer','News','Sport','Flingo','samba',
'tv.samba','launchpad','Pandora','Radio','Hulu','KontrolTV','tv.primeguide',
'primeguide','Tester','Olympus','com.dailymotion','Dailymotion','SnagFilms',
'Twonky TV','Turner-TNT-Leverage','Turner-TBS-BBT','Turner-NBA-GameTime',
'Turner-TNT-FallingSkies','WiDi','cloudmedia','freeott','popcornhour',
'Turner-TBS-TeamCoco','RedboxInstant','YuppTV.Remote','D-Smart','D-SmartBLU',
'Turner-CNN-TVE','Turner-HLN-TVE','Turner-CN-TVE','Turner-AS-TVE',
'Turner-TBS-TVE','Turner-TNT-TVE','Turner-TRU-TVE','Gladiator','com.lge',
'lge','JustMirroring','ConnectedRedButton','sling.player.googletv','Famium',
'tv.boxee.cloudee','freesat','freetime','com.humax','humax','HKTV',
'YahooScreen','reachplus','reachplus.alerts','com.reachplus.alerts',
'com.rockettools.pludly','Grooveshark','mosisApp','com.nuaxis.lifely',
'lifely','GuestEvolutionApp','ezktv','com.milesplit.tv','com.lookagiraffe.pong',
'Crunchyroll','Vimeo','vGet','ObsidianX','com.crossproduct.caster',
'com.crossproduct.ether','Aereo','testapp3e','com.karenberry.tv',
'cloudtv','Epictv','QuicTv','HyperTV','porn','pornz','Plex','Game',
'org.enlearn.Copilot','frequency', 'PlayMovies' ]
try:
  url = sys.argv[1]
except:
  print "Usage: %s tv_apps_url" % sys.argv[0]
  sys.exit(1)

for app in apps:
  try:
    u = urllib2.urlopen("%s/%s"%(url,app))
    print "%s:%s" % ( app, repr(str(u.headers)+u.read()) )
  except:
    pass

Some of those app names appear pretty interesting. (Note to self: Find all corresponding apps.) The scanner looks for URLs returning positive responses (200 result codes and some XML), and prints them out:

 $ ./tvenum.py http://192.168.1.222:60151/apps/ 

YouTube:'Content-Type: application/xml\r\n<?xml version="1.0" encoding="UTF-8"?>\r\n<service xmlns="urn:dial-multiscreen-org:schemas:dial">\r\n  <name>YouTube</name>\r\n  <options allowStop="false"/>\r\n  <state>stopped</state>\r\n</service>\r\n'

Netflix:'Content-Type: application/xml\r\n<?xml version="1.0" encoding="UTF-8"?>\r\n<service xmlns="urn:dial-multiscreen-org:schemas:dial">\r\n  <name>Netflix</name>\r\n  <options allowStop="false"/>\r\n  <state>stopped</state>\r\n</service>\r\n'

Hopefully this article has been informative for those who may be looking for new devices and attack surfaces to investigate during application or penetration testing.

 

Signatures or PINs? EMV is Coming

Whether you are a seasoned, international road warrior, or a domestic suburbanite, new security features will soon be showing up on a credit card near you. In light of recent card data compromises, there’s a new drive to adopt credit card security technologies known as “Chip and PIN” (typically noted as “chip/PIN”) to better secure credit card data against fraud or compromise. While chip/PIN is new to most U.S. cardholders, it is the norm across most of Europe, Canada, and Mexico. There have been many initiatives in the last several years to drive U.S. payment card systems towards more secure technologies, but only now is adoption of chip/PIN starting to get increased traction across the U.S. payment card industry.

For individual card holders, these developments are important, and in this post we will cover some of the key points of these technologies.

 

First, what exactly is chip/PIN and what does it do to protect credit card data?

In a chip/PIN environment, when purchasing goods at a point of sale (POS) device, the credit card is inserted or “dipped” into a card reading device—not swiped as it is in the U.S. Once inserted, the customer inputs a PIN which authenticates the cardholder against the chip embedded on the card. Upon successful authentication, the chip generates the data necessary to complete the transaction and transmits the data for authorization.

Before we get too far into the discussion about chip/PIN, there is one point that needs to be clarified: The chip component of chip/PIN cards is sometimes referred to as “EMV data” or “EMV transactions” in the payment industry. The term EMV (for Europay, MasterCard and Visa) refers to a standard definition for chip-based payment cards, or “chip cards”—also referred to as “IC (integrated circuit) cards” as defined by EMVCo LLC. EMV is the basis for the chip/PIN implementation throughout Europe, and is planned for implementation in the U.S. (more on that, below). In short, EMV refers to the “chip” portion of chip/PIN cards, with the “PIN” implementation being a separate matter entirely.

Why is this relevant? Because much of what has been discussed thus far about implementing chip cards in the U.S. is focused primarily on the “chip” component, and does not necessarily include the “PIN” component that is otherwise present in Europe’s EMV environment. In lieu of using a PIN to authenticate the chip card, discussions in the U.S. have leaned toward reliance on manual signature verification (such as when a clerk compares the signature on the receipt to the signature on the card). As a result, the U.S. implementation will likely wind up being referred to as “chip and signature” or “chip/signature.”

 

What’s the difference between chip/PIN and chip/signature?

From the merchant’s perspective the credit-card payment process wouldn’t change significantly, outside of likely hardware upgrade requirements. And from the processor’s perspective, there really isn’t a difference, as long as they process or support transactions using EMV, or “track-equivalent data.”

Track-equivalent data is the data — including cryptographic data — used for transaction authentication and authorization within EMV environments. It is generated by the on-board integrated circuit, or the “chip,” on the card itself—not the card-reading device. This is not to say that track-equivalent data is “secure” in-and-of-itself. Because of some of the underlying functional requirements, track-equivalent data typically includes certain discretionary data elements, some of which are sensitive in nature and cannot be stored (something merchants should note).

From the cardholder perspective, however, there is one notable difference and that is the requirement of a PIN or signature to verify that the person holding the card is the actual card owner.

 

Is chip/PIN more or less secure than chip/signature?

That depends.

In a chip/PIN scenario, the PIN is used to authenticate the cardholder against the information stored on the chip. If you don’t know the PIN, the chip won’t give up the information necessary to complete the transaction. In a chip/signature scenario (theoretically speaking), the clerk responsible for completing the transaction is required to validate the customer’s signature on the receipt against the signature on the card. If the signature doesn’t match closely enough in the clerk’s judgment, the transaction shouldn’t be completed. Say what you will about how consistently signature verification is practiced versus how it is supposed to work in theory; there are equally compelling arguments for either approach.

In a chip/PIN environment, as long as the cardholder’s PIN is kept secret, it is theoretically impossible for someone to use a stolen card to perform fraudulent card-present transactions. It is because of the PIN requirement that card criminals have evolved their data collection strategies to include video surveillance targeting PIN entry devices, such as ATMs and retail point-of-sale terminals, to capture customer PINs. Once the PIN is compromised, the card can be used for fraudulent transactions. On the other hand, I can show my signature to anyone, put it on all my receipts, etc., and the likelihood of anyone being able to reliably reproduce it on demand is pretty slim (expert forgers excluded). Ultimately, the question boils down to this: which is a more secure means of verifying that a credit card belongs to the person holding it?

 

Conclusion

One might erroneously conclude that the U.S. implementation of EMV heading in the direction of chip/signature undermines many of the anti-fraud security protections of chip/PIN. However, the more the issue is considered from multiple sides (especially in putting everything together for this article), the clearer it becomes that there is no significant security benefit of one solution over the other. Whether it is PIN or signature, that control is only used to authenticate the cardholder; the rest is about implementing security controls via EMV and integrated circuit cards, which has nothing to do with either PINs or signatures. Until there is historical data to demonstrate the effectiveness or ineffectiveness of signatures vs. PINs in reducing card fraud, the jury is still out on whether either solution offers a significant upside over the other.

Ultimately, whether cards are authenticated via PIN or signature, the chip-based credit cards being rolled out in the U.S. will rely upon EMV security measures to protect the security of credit card data. These technologies provide a solid foundation for improving the overall security of credit card information and limiting fraud and misuse of compromised credit card data.

 

Resources

EMVCo LLC Website: http://www.emvco.com/

Wikipedia: EMV http://en.wikipedia.org/wiki/EMV

Putting Windows XP Out To Pasture

Farewell, Windows XP! We hated you, then loved you, and soon we’ll hate you again.

 (This post is a resource for home and small-business users with questions about the impending end-of-life for Windows XP. Larger enterprise users have some different options available to them; contact us to discuss your situation and options.)

For those who haven’t seen it in the news yet: Microsoft will be ending support for its hugely successful operating system, Windows XP, on April 8th. This means that users of the 12-year-old operating system will no longer be able to get updates, and in particular will not be able to get security updates. Users of more modern versions of Windows, such as Windows Vista or Windows 7, will remain supported for several more years.

Once support ends, computers still on Windows XP will become a very juicy target for Internet criminals and attackers. Internet crime is big business, so every day there are criminals looking for new weaknesses in computer systems (called vulnerabilities), and developing attacks to take advantage of them (these attacks are called exploits). Normally, the software vendor (Microsoft in this case) quickly finds out about these weaknesses and releases updates to fix them. When an exploit is developed, some number of people fall victim shortly after the exploit is first used, but people who get the update in a relatively timely manner are protected.

But what happens when a vendor stops updating the software? All of a sudden, the bad guys can use these same attacks, the same exploits, indefinitely. As a product nears end of life, attackers have an incentive to hold off on using critical vulnerabilities until the deadline passes. The value of their exploits goes up significantly once they have confidence that the vendor will never patch it. Based on that, we can expect a period of relative quiet in terms of announced vulnerabilities affecting XP from now until shortly after the deadline, when we will likely see stockpiled critical vulnerabilities begin circulating. From then on, the risk of these legacy XP systems will continue to increase, so migrating away from XP or dramatically isolating the systems should be a priority for people or organizations that still use them.

How do I know if I’m running Windows XP?

  • If your computer is more than 5 years old, odds are it is running Windows XP
  • Simplest way: “Win+Break”: Press and hold down the Windows key on your keyboard, then find the “Pause” or “Break” key and press it. Let both keys go. That will show the System Properties window. You may have to hunt around for your “Pause/Break” key, but hey, it finally has a use.
  • Alternate way: Click the Start Menu -> Right click on “My Computer” -> On the menu that comes out, click on Properties
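
(For those comfortable with a command line, a couple of lines of Python answer the same question; this is just an alternative to the GUI steps illustrated below.)

# Prints e.g. "Windows XP" on an XP machine.
import platform
print platform.system(), platform.release()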

 

[Image]

Click the Start Menu, then right-click My Computer, then click Properties.

 

[Image]

Your version of Windows will be the first thing on the System Properties window.

 

How do I stay safe?

Really, you should think about buying a new computer. You can think of it as a once-a-decade spring cleaning. If your computer is old enough to have Windows XP, having an unsupported OS is likely just one of several problems. It is possible to upgrade your old computer to a newer operating system such as Windows 7, or to convert to a free Linux-based operating system, but this may be a more complicated undertaking than many users want to tackle.

Any computer you buy these days will be a huge step up from a 7-year-old (at least!) machine running XP, so you can comfortably shop the cheapest lines of computers. New computers can be found for $300, and it’s also possible to buy reputable refurbished ones with a modern operating system for $100-$200.

For those who really don’t want to or can’t upgrade, the situation isn’t pretty. Your computer will continue to work as it always has, but the security of your system and your data is entirely in your hands. These systems have been low-hanging fruit for attackers for a long time, but after April 8th they will have a giant neon bull’s-eye on them.

There are a few things you can do to reduce your risks, but there really is no substitute for timely vendor patches.

  1. Only use the system for tasks that can’t be done elsewhere. If the reason for keeping an XP machine is to run some specific program or piece of hardware, then use it only for that. In particular, avoid web browsing and email on the unsupported machine: both activities expose the vulnerable system to lots of untrusted input.
  2. Keep all of your other software up to date. Install and use the latest version of Firefox or Chrome web browsers, which won’t be affected by Microsoft’s end of life.
  3. Back up your computer. There are many online backup services available for less than $5 a month. If something goes wrong, you want to make sure that your data is safe. Good online backup services provide a “set it and forget it” peace of mind. This is probably the single most important thing you can do, and should be a priority even for folks using a supported operating system. Backblaze, CrashPlan, and SpiderOak are all reasonable choices for home users.
  4. Run antivirus software, and keep it up to date. AVAST, AVG, and Bitdefender are all reasonable free options but be aware that antivirus is only a layer of protection: it’s not perfect.

 

What Kickstarter Did Right

Only a few details have emerged about the recent breach at Kickstarter, but it appears that this one will be a case study in doing things right both before and after the breach.

What Kickstarter has done right:

  • Timely notification
  • Clear messaging
  • Limited sensitive data retention
  • Proper password handling

Timely notification

The hours and days after a breach is discovered are incredibly hectic, and there will be powerful voices both attempting to delay public announcement and attempting to rush it. When users’ information may be at risk beyond the immediate breach, organizations should strive to make an announcement as soon as it will do more good than harm. An initial public announcement doesn’t have to have all the answers; it just needs to give users an idea of how they are affected and what they can do about it. While it may be tempting to wait for full details, an organization that shows transparency in the early stages of a developing story is going to have more credibility as the story goes on.

Clear messaging

Kickstarter explained in clear terms what was and was not affected, and gave straightforward actions for users to follow as a result. The logging and access control groundwork for making these strong, clear statements at the time of a breach needs to be laid far in advance and thoroughly tested. Live penetration testing exercises with detailed post mortems can help companies decide if their systems will be able to capture this critical data.

Limited sensitive data retention

One of the first questions in any breach is “what did they get?”, and data handling policies in place before a breach are going to have a huge impact on the answer. Thinking far in advance about how we would like to be able to answer that question can be a driver for getting those policies in place. Kickstarter reported that they do not store full credit card numbers, a choice that is certainly saving them some headaches right now. Not all businesses have quite that luxury, but thinking in general about how to reduce the retention of sensitive data that’s not actively used can reduce costs in protecting it and chances of exposure over the long term.

Proper password handling (mostly)

Kickstarter appears to have done a pretty good job in handling user passwords, though not perfect. Password reuse across different websites continues to be one of the most significant threats to users, and a breach like this can often lead to ripple effects against users if attackers are able to obtain account passwords.

In order to protect against this, user passwords should always be stored in a hashed form, a representation that allows a server to verify that a correct password has been provided without ever actually storing the plaintext password. Kickstarter reported that their “passwords were uniquely salted and digested with SHA-1 multiple times. More recent passwords are hashed with bcrypt.” When reading breach reports, the level of detail shared by the organization is often telling and these details show that Kickstarter did their homework beforehand.

A strong password hashing scheme must protect against the two main approaches that attackers can use: hash cracking, and rainbow tables. The details of these approaches have been well-covered elsewhere, so we can focus on what Kickstarter used to make their users’ hashes more resistant to these attacks.

To resist hash cracking, defenders want to massively increase the amount of work an attacker has to do to check each possible password. The problem with hash algorithms like SHA1 and MD5 is that they are too efficient; they were designed to be completed in as few CPU cycles as possible. We want the opposite from a password hash function, so that it is reasonable to check a few possible passwords in normal use but computationally ridiculous to try out large numbers of possible passwords during cracking. Kickstarter indicated that they used “multiple” iterations of the SHA1 hash, which multiplies the attacker effort required for each guess (so 5 iterations of hashing means 5 times more effort). Ideally we like to see a hashing attempt take at least 100 ms, which is a trivial delay during a legitimate login but makes large scale hash cracking essentially infeasible. Unfortunately, SHA1 is so efficient that it would take more than 100,000 iterations to raise the effort to that level. While Kickstarter probably didn’t get to that level (it’s safe to assume they would have said so if they did), their use of multiple iterations of SHA1 is an improvement over many practices we see.
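
A quick experiment shows the scale of the problem. The sketch below (my own illustration, not Kickstarter’s actual scheme) times iterated, salted SHA-1 at increasing round counts to show how many rounds it takes to push a single guess toward that ~100 ms target:

# Time iterated, salted SHA-1 at increasing round counts.
import hashlib, os, time

def iterated_sha1(password, salt, rounds):
    digest = hashlib.sha1(salt + password).digest()
    for _ in xrange(rounds - 1):
        digest = hashlib.sha1(digest).digest()
    return digest

salt = os.urandom(16)
for rounds in (1, 1000, 100000):
    start = time.time()
    iterated_sha1("correct horse", salt, rounds)
    print "%6d rounds: %.4f seconds per guess" % (rounds, time.time() - start)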

To resist rainbow tables, it is important to use a long, random, unique salt for each password. Salting passwords removes the ability of attackers to simply look up hashes in precomputed rainbow tables. Using a random, unique salt on each password also means that an attacker has to perform cracking on each password individually; even if two users have an identical password, it would be impossible to tell from the hashes. There’s no word yet on the length of the salt, but Kickstarter appears to have gotten the random and unique parts right.

Finally, Kickstarter’s move to bcrypt for more recent passwords is particularly encouraging. Bcrypt is a modern key derivation function specifically designed for storing password representations. It builds in the idea of strong unique salts and a scalable work factor, so that defenders can easily dial up the amount of computation required to try out a hash as computers get faster. Bcrypt and similar functions such as PBKDF2 and the newer scrypt (which adds memory requirements) are purpose-built to make it easy to get password handling right; they should be the go-to approach for all new development, and a high-priority change for any codebases still using MD5 or SHA1.
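
Getting this right takes only a few lines. A minimal sketch using the py-bcrypt package (the work factor of 12 is an arbitrary example; tune it to your hardware):

# Hash a password with bcrypt, then verify a login attempt against it.
import bcrypt

hashed = bcrypt.hashpw("hunter2", bcrypt.gensalt(12))  # salt and work factor are embedded in the hash
if bcrypt.hashpw("hunter2", hashed) == hashed:
    print "password accepted"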

On NTP distributed denial of service attacks

NTP, the Network Time Protocol, is a time synchronization protocol implemented on top of UDP. UDP is designed for speed at the cost of simplicity, which suits the inherent time-sensitivity (or more specifically, jitter sensitivity) of NTP. Time is an interesting case in computer security. Time isn’t exactly secret; it has relatively minor confidentiality considerations, but in certain uses it’s exceedingly important that multiple parties agree on the time: engineering, space technology, financial transactions, and such.

At the bottom is a simple equation:

denial of service amplification = bytes out / bytes in

When you get to a ratio > 1, a protocol like NTP becomes attractive as a magnifier for denial of service traffic.
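
As a back-of-the-envelope example (the figures below are purely illustrative, not measurements of any particular server), a single small spoofed query that triggers a large, multi-packet reply yields a substantial multiplier:

# Amplification = bytes out / bytes in, per the equation above.
bytes_in = 64            # one small spoofed request
bytes_out = 100 * 448    # a large reply spread across many packets
print "amplification = %.0fx" % (float(bytes_out) / bytes_in)   # 700x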

UDP’s simplicity makes it susceptible to spoofing. An NTP server can’t always decide whether a request is spoofed or not; in many cases it’s up to the network to decide. For a long time, operating system designers, system implementers, and ISPs did not pay a lot of attention to managing or preventing spoofed traffic. It was, and is, up to millions of internet participants to harden their networking configurations to limit the potential for denial of service amplification. Economically there’s frequently little incentive to do so: most denial of service attacks target someone else, and the impact of being involved as a drone is relatively minor. As a result you get systemic susceptibility.

My advice is for enterprises and individuals to research and implement network hardening techniques on the systems and networks they own. This often means tweaking system settings, or in certain cases may require tinkering with routers and switches. Product specific hardening guides can be found online at reputable sites. As with all technology, the devil is in the details and effective management is important in getting it right.

Gutting a Phish

In the news lately there have been countless examples of phishing attacks becoming more sophisticated, but it’s important to remember that the entire “industry” is a bell curve: the most dedicated attackers are upping their game, but advancements in tooling and automation are also letting many less sophisticated players get started even more easily. Put another way, spamming and phishing are coexisting happily as both massive multinational business organizations and smaller cottage-industry efforts.

One such enterprising but misguided individual made the mistake of sending a typically blatant phishing email to one of our Neohapsis mailing lists, and someone forwarded it along to me for a laugh.

[Image]

The phishing email, as it appeared in a mailbox

As silly and evident as this is, one thing I’m constantly astounded by is how the proportion of people who will click never quite drops to zero. Our work on social engineering assessments bears out this real world example: with a large enough sample set, you’ll always hook at least one. In fact, a paper out of Microsoft Research suggests that, for scammers, this sort of painfully blatant opening is actually an intentional tool: it acts as a filter that only the most gullible will pass.

Given the weak effort put into the email, I was curious to see if the scam got any better if someone actually clicked through. To be honest, I was pleasantly surprised.

[Image]

The phishing site: a combination of legitimate Apple code and images and a form added by the attacker

The site is dressed up as a reasonable approximation of an official Apple site. In fact, a look at the source shows that there are two things going on here: some HTML/CSS set dressing and template code that is copied directly from the legitimate Apple site, and the phishing form itself which is a reusable template form created by one of the phishers.

Naturally, I was curious where the data went once the form was submitted. I filled in some bogus data and submitted it (the phishing form helpfully pointed out any missing data; there is a certain audacity in being asked to check the format of the credit card number that’s about to be stolen). The data POST went back to another page on the same server, then quickly forwarded me on to the legitimate iTunes site.

[Image: Burp capture of the form submission and the forward to the legitimate site]

This is another standard technique: if a “login” appears to work because the victim was already logged in, the victim will often simply proceed with what they were doing without questioning why the login was prompted in the first place. During social engineering exercises at Neohapsis, we have seen participants repeatedly log into a cloned attack site, with mounting frustration, as they wonder why the legitimate site isn’t showing them the bait they logged in for.

Back to this phishing site: my application security tester spider senses were tingling, so I felt that I had to see what our phisher was doing with the data being submitted. To find out, I replayed the submit request with various types of invalid data, strings that should cause errors depending on how the data was being parsed or stored. Not a single test string produced any errors or different behavior. This could be an indication that any parsing and processing is being done carefully and correctly, but the far more likely case is that they’re simply doing no processing and dumping it all straight out as plain text.
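
The replay itself was nothing fancy; conceptually it boils down to something like this (the URL and field names below are placeholders, not the actual ones from the phishing kit):

# Replay the harvester POST with assorted canary strings.
import urllib, urllib2

url = "http://compromised-host.example/Snd/Snd.php"   # placeholder host
for payload in ["'", '"', "<script>", "%00", "A" * 5000]:
    data = urllib.urlencode({"appleid": payload, "card": payload})
    try:
        urllib2.urlopen(url, data).read()
        print repr(payload), "-> accepted"
    except urllib2.HTTPError, e:
        print repr(payload), "->", e.code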

Interesting… if harvested data is just being simply dumped to disk, where exactly is it going? Burp indicates that the data is being POSTed to a harvester script at Snd/Snd.php. I wonder what else is in that directory?

[Image]

Under the hood of the phishing site, the loot stash is clearly visible

That results.txt file looks mighty promising… and it is.

[Image]

The format of the result.txt file

These are the raw results dumped from victims by the harvester script (Snd.php). The top entry is dummy data that I submitted, and when I checked it, the file was entirely filled with the various dummy submissions I had made before. It’s pretty clear from the results that I was the first person to actually click through and submit data to the phish site; that’s actually pretty fortunate, because if a victim did enter legitimate information, the attacker would have to sort it out from a few hundred bogus submissions. Any day that we can make life harder for the bad guys is a good day.

So, the data collection is dead simple, but I’d still like to know a bit more about the scam and the phishers if possible. There’s not a lot to go on, but the tag at the top of each entry seems unique. It’s the sort of thing we’re used to seeing when hackers deface a website and leave a tag to publicize the work:

------------+| $ o H a B  Dz and a m i r TN |+------------

Googling some variations turned up Google cache of a forum post that’s definitely related to the phishing site above; it’s either the same guy, or someone else using the same tool.

[Image]

A post in a carder forum, offering to sell data in the same format as generated by the phishing site above

A criminal using the name AppleFullz is selling complete information dumps of login details and credit card numbers plus CVV numbers (called “fulls” in carder forums) captured in the exact format that the Apple phish used, and even provides a sample of his wares. (Insult to injury for the victim: not only was his information stolen, but it’s being given away as the credit card fraud equivalent of the taster trays at the grocery store.) This carder is asking for $10 for one person’s information, but is willing to give bulk discounts: $30 for 5 accounts. (This is actually a discount over the sorts of prices normally seen on carder forums; Krebs recently reported that Target cards were selling for $20-$100 per card. I read this as an implicit acknowledgement by our seller that this data is much “dirtier” and that buyers are expected to mine it for legitimate entries.) The tools being used here are a combination of some pre-existing scraps of PHP code widely used in other spam and scam campaigns (the section labeled “|INFO|VBV|”), and a separate section added specifically to target Apple IDs.

Of particular interest is that the carder provided a Bitcoin address. For criminals, Bitcoin has the advantage of anonymity but the disadvantage that transactions are public. This means that we can actually look up how much money has flowed into that particular Bitcoin address.
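
Anyone can do this arithmetic. A sketch using the blockchain.info JSON API (the address below is a placeholder, not the carder’s actual address; amounts come back in satoshi):

# Total up the funds ever received by a Bitcoin address.
import json, urllib2

addr = "1PlaceholderAddressxxxxxxxxxxxxxxx"   # placeholder
info = json.load(urllib2.urlopen("https://blockchain.info/rawaddr/" + addr))
print "total received: %.8f BTC" % (info["total_received"] / 1e8)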

[Image]

Ill-gotten gains: the Bitcoin blockchain records transfers into the account used for selling stolen Apple IDs and credit card numbers.

From November 17, when the forum posting went up, until December 4th, when I investigated this phishing attempt, he received Bitcoin transfers totaling 0.81815987 BTC, which is around $744.53 (based on the BTC value on 12/4). According to his price sheet, that translates to a sale of between 74 and 124 records: not bad for a month of terribly unsophisticated phishing.

Within a few hours of investigating the initial phishing site, it had been removed. The actual server where the phish site was hosted was a legitimate domain that had been compromised; perhaps the phisher noticed the volume of bogus traffic and decided that the jig was up for that particular phish, or the system administrator got tipped off by the unusual traffic and investigated. Either way the phish site is offline, so that’s another small victory.

5 Tips for Safer Online Shopping

Use good password practices

No surprise here – it seems to be on the top of every list of this kind, but people still don’t listen. Passwords are still (and will continue to be) the weakest form of authentication. In a perfect security utopia passwords would not exist, but since we’re not there (yet) everyone relies on them. The two main rules on passwords are: make them complex, and make them unique. Complex doesn’t necessarily mean you need thirty random character monstrosities that only a savant could remember, but avoid dictionary words and don’t think that you’re safe by just appending numbers or special characters. The first thing an attacker will do is take every English word in the dictionary and append random characters to the end of it. Yep, “password1989!” is just as (in)secure as “password”. Lastly, passwords should be unique to each site. This is an even bigger sin that most people (myself included) are guilty of. We have one good password so we use it for everything. The problem with this is obvious: if it gets compromised an attacker has access to everything. When LinkedIn’s passwords were compromised last year I realized I was using the same password for all my social media accounts, leaving all those vulnerable too. You don’t need to make an attacker’s job easier for him or her by reusing passwords. Make them work for each one they need to crack.

Store sensitive data in secure locations

Hopefully, you’ve followed the first rule and have unique, complex passwords for every site you visit. Now, how to remember them all? This is where I love to recommend password managers. Password managers securely store all your login information in an easily accessible location. I emphasize “securely” here, because I see far too many people with Word documents called “My Passwords” or the like sitting on their desktops. This is a goldmine for any attacker who has access to it. I’ve even seen these “password” files being shared unencrypted in the cloud, so people can pull them up on their phones or tablets to remember their passwords on-the-go. Please don’t do this. Now if you lose your phone you also lose every password to every secure site you have.

Instead, use a password manager like 1Password, LastPass, or KeePass, to name a few popular ones. These encrypt and store your sensitive information (not just passwords, but also SSNs, CC numbers, etc.) in an easy-to-access format. You encrypt your “wallet” of passwords with one very secure password (the only one you ever need to remember), and can even additionally encrypt it with a private key. A private key works just like a physical key: you need a copy of it to access the file. Keep it on a USB stick on your keychain and a backup in a fire-proof safe.

Watch out for HTTP(S)

Ever notice how some sites start with https:// as opposed to http:// ? That little ‘s’ at the end makes a whole world of difference. When it’s present, it means that you have established a trusted and encrypted connection with the website. Its security purpose is two-fold: all data between you and the site is encrypted and cannot be eavesdropped on, and you have established through a chain of trust that the website you are visiting is, in fact, who it says it is.
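
Under the hood, your browser is opening a TLS connection and validating the server’s certificate chain against a set of trusted root authorities. A rough sketch of the same check in Python (requires 2.7.9+ for ssl.create_default_context):

# Connect over TLS, validating the certificate, and print its subject.
import socket, ssl

ctx = ssl.create_default_context()
sock = ctx.wrap_socket(socket.socket(), server_hostname="www.bankofamerica.com")
sock.connect(("www.bankofamerica.com", 443))   # handshake fails if the cert doesn't validate
print sock.getpeercert()["subject"]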

For example, this is what the address bar on Firefox looks like when I have a secure connection to Bank of America:

[Image: the Firefox address bar showing an HTTPS connection to Bank of America]

Notice the ‘https’ and the padlock icon. If you are ever on a webpage that is asking you to enter sensitive information (like a password) and you don’t see something similar, don’t enter it! There could be any number of reasons why you are not connected via HTTPS, including benign ones, but it’s better to be safe than sorry. Likewise, if you ever receive a warning from your browser like this:

[Image: a browser SSL certificate warning]

It means that the browser cannot verify the website is actually who it says it is. Phishing sites can imitate legitimate logins down to the smallest detail, but they cannot imitate their SSL certificates. If you see this type of warning when trying to access a well-known site, get out immediately! There could be a legitimate problem with the website or your browser, but more likely somebody is impersonating the site and trying to fool you!

Install those nagging updates

Microsoft actually does an excellent job of patching vulnerabilities when they arise; the problem is most people don’t install the patches. On the second Tuesday of each month (“Patch Tuesday”), new patches and updates are released to the public. Microsoft will also release patches out-of-band (OOB), meaning as needed rather than waiting for the next scheduled release, for serious vulnerabilities. These patches are a great way to fix security holes, but they also carry a nasty catch: attackers use the patches to see where the holes were.

Every “Patch Tuesday” attackers will reverse engineer the Windows updates to discover new vulnerabilities and then attempt to target machines that have not applied the update yet. It’s akin to a car manufacturer releasing a statement saying “this year and model car can be unlocked with a toothpick, so apply this fix.” Now every car thief in the world knows to look out for that year and model, and if the fix hasn’t been applied they know to try a toothpick.

This is why it’s imperative to keep your computer up to date. The “Conficker” worm that ran rampant in 2009 exploited a security vulnerability that was patched by Microsoft almost immediately. Part of the reason it spread so successfully was people’s reluctance to install new Windows updates. It preyed on out-of-date systems.

Likewise, many online exploits will use common vulnerabilities found in different software, like Flash, Java, or even the browsers. When software that you use online prompts you to install an update – do it!

So the next time your computer asks you to restart to install updates, go grab a cup of coffee and let it do its thing. It’ll save you in the long run.

(note: Mac users are not exempt! Install those updates from Apple as well!)

It’s okay to be a little paranoid

My last tip is more of a paradigm shift than a tip for when you are conducting business online. It’s okay to be a little paranoid. The old mantra “if it’s too good to be true, it probably is” has never been more applicable than with common phishing schemes. I’m sure most people know by now not to trust a pop-up that says “You’ve won an iPad – click here!”, but modern phishing techniques are much more subtle – and much more dangerous.

One of the only times I’ve ever fallen victim to a phishing scheme was when “Paypal” emailed me asking me to confirm a large purchase because it was suspicious. Since I didn’t make the order I immediately thought I had been compromised. I went into panic mode, clicked the link, entered my password….and, wait, I just entered my Paypal password into a site I don’t even recognize. They got me.

It’s okay to mistrust emails and links. If something seems phishy (pun intended) then exit out. Services like Paypal and online banks will never ask for personal information over email, chat, or any avenue besides their main website. If you have an issue, go to their website, ensure that ‘s’ is in your address bar, and do your business from there. If you’re still not convinced, find their 800 number and call them. The point is, if I had stayed calm for a second and thought it was strange Paypal was asking me to urgently log in via an email message, I would have gathered myself, gone to their official site to log in and then looked for any alerts or suspicious activity. I could have even called them.

Trying not to sound too misanthropic here, but when it comes to dealing with sensitive information online, it’s better not to trust someone initially than it is to trust them implicitly. Your bank account information won’t be deleted and nothing bad will happen if you don’t immediately update your password, so take a second to make sure what you’re doing is actually legit.

Configuration Assurance: Evolving Security Beyond the Basics

Perhaps it is more appropriate for us to approach configuration management as an assurance process meant to ensure system integrity is maintained over time. By evolving our view of how to establish and control the integrity of the different devices and technologies in use, the concept of “configuration management” evolves to become more about “configuration assurance.”

The Need to Manage Configuration 

When considering the different aspects of information security program management, few topics are of as much importance to an organization’s overall security posture as “configuration management.” This is due, in part, to the number of different standards and processes that typically comprise or govern a configuration management program. And it is usually the lack of governance or enforcement of configuration management practices that leads to system and information compromises.

When we look at configuration management, it is important for us to keep in mind that what we’re really addressing is the “I” of InfoSec’s “Confidentiality, Integrity, and Availability,” or “C.I.A.” Because of this, we should understand each of the different parts that make up a configuration management program or process, and further understand them as part of an overall process for ensuring the integrity of any given device or system. Ultimately, the basis for establishing and verifying the integrity of a device or system needs to be consistent with the information security standards defined by the organization, industry best practices, industry or governmental regulation, and relevant legislative requirements.

The Basics of Configuration Management 

The objective of any meaningful configuration management program is a security-minded framework within which all information systems can be tracked, classified, reviewed, analyzed, and maintained according to a consistent set of practices and standards. Configuration management programs usually incorporate several different standards and processes to address the diverse aspects of information security, such as standard build/configuration documentation and processes, antivirus monitoring, patch management, vulnerability management, asset management, etc. Essentially, it comes down to having a lot of eggs that ultimately wind up in the same basket, with the objective being that none of the eggshells get broken.

At a high level, the functional and security requirements for most of these programs and services are fairly well understood. It is common for organizations to treat each of the different aspects of configuration management as stand-alone programs or processes. However, reality is quite different. In addition to ensuring that a configuration management program addresses all of the relevant security requirements, it is also equally necessary to understand how each individual security process or program relates to other security processes or programs. Why? Because each of the processes associated with configuration management impacts other processes related to configuration management. The manner in which these interrelationships are addressed (or not addressed) may expose significant risks in critical or sensitive information systems.

Regardless, many organizations still tend to approach delivery of these programs and services as individual and somewhat isolated or unrelated processes. This is especially true for organizations that heavily focus on meeting compliance requirements without embracing the larger concept of “information security.” This is also true in organizations where information security programs are less mature, or if there is an over-reliance on technology in the absence of formal documentation.

Where the Gaps May Lie

Following are a couple of examples where gaps might typically occur in the configuration management process. After each example, I’ve put together a few follow-up questions to help explore each issue a little more in-depth.

A. Auditing and Log Monitoring – Most security policies and system configuration standards tend to address audit and logging requirements at the operating system level. However, operating system audit log services are not always capable of capturing detailed audit log data generated by some applications or services. As a result, it may be necessary to combine and correlate multiple audit log data sources (perhaps from multiple devices) to reconstruct a specific chain of events. All business processes should be reviewed to ensure that the full complement of required audit log data is being collected and reviewed.

  1. Do your policies, standards, and processes ensure that all required security audit log data is collected for any and all firewalls/routers, workstations, critical/sensitive applications, databases, monitoring technologies, and other relevant security devices or technologies used in the environment?
  2. Do policies or standards require audit log data collection to include audit log data from all antivirus endpoints, file integrity monitoring endpoints, IDS/IPS alerts and events, security devices or applications, and file or database access?
  3. Is all audit log data, of all types, collected to a centralized source or sources?
  4. Is all audit log data backed up regularly (at least daily) and protected against unauthorized access or modification?
  5. Is audit log data from one source combined and correlated with audit log data from other devices or services to reconstruct specific activities, identify complex attacks, and/or raise appropriate alerts?
  6. Has your organization performed any testing or forensic activities to verify that audit log information currently being collected is sufficient to raise appropriate alerts and reconstruct the events related to any suspicious activity?
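
To make questions 5 and 6 a bit more concrete, below is a minimal sketch of cross-source correlation in Python. Everything about it is illustrative: the file names, the CSV layout, and the five-minute window are hypothetical stand-ins for whatever formats and thresholds your environment actually uses.

    #!/usr/bin/env python3
    # Minimal sketch: correlate two hypothetical audit log exports by user
    # and time window. Real log schemas and formats will differ.
    import csv
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=5)  # how close two events must be to correlate

    def load_events(path):
        # Expects CSV columns: timestamp (ISO 8601), user, event
        with open(path, newline="") as f:
            return [{"time": datetime.fromisoformat(r["timestamp"]),
                     "user": r["user"],
                     "event": r["event"]}
                    for r in csv.DictReader(f)]

    os_logons = load_events("os_audit.csv")   # hypothetical OS logon export
    db_access = load_events("db_audit.csv")   # hypothetical DB access export

    # Flag database activity with no OS logon by the same user nearby in time,
    # the kind of gap that only shows up when sources are combined.
    for db in db_access:
        if not any(lg["user"] == db["user"]
                   and abs(lg["time"] - db["time"]) <= WINDOW
                   for lg in os_logons):
            print(f"ALERT: {db['user']} ran '{db['event']}' at {db['time']} "
                  f"with no correlated OS logon")

A real SIEM does this at scale, of course, but even a toy like this can surface whether the audit log data you are currently collecting is actually sufficient to reconstruct events.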

B. Standard Build/Configuration – It is commonplace for organizations to have standards documentation describing how to install and configure the different kinds of operating systems (and sometimes databases) used in the environment. However, it is not quite so common to have similar documentation (or a similar level of detail) when it comes to some specialized technologies or functions. As we are all aware, a secure technical environment relies on more than just securing the operating systems and extends to all devices in use. Policies, standards, and processes should exist to address all technologies used in the environment and should define how to establish, maintain, and verify the integrity of any device or application intended for use within the environment. (A sketch of automating the patch-level check in question 3 follows these questions.)

  1. Do documentation and processes currently exist to define the secure initial configuration of all technology device types and applications in use in the environment? This includes technologies or devices such as firewalls, routers, servers, databases, mainframe/mid-range, wireless technologies and devices, mobile computing devices (laptops and smartphones), workstations, point-of-interaction devices, IVR systems, and any other technologies related to establishing, enforcing, or monitoring security posture or controls.
  2. Are configuration standards cross-checked to ensure that all relevant information security subject areas are addressed or appropriately cross-referenced? For example, do OS configuration standards include details for installing antivirus or other critical software (FIM, patch management, etc.)? If not, is a reference provided to supporting documentation that details how to install antivirus or other critical software for each specific operating system type?
  3. Do documentation and processes currently exist to define not just the secure configuration of the base operating system, but also a minimum patch level or version a system must meet (e.g., “Win7 SP1” or “Apache version X.X.Y”) before being permitted to connect to the network environment?
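
As a rough illustration of question 3, the sketch below checks a host against a documented minimum patch level before permitting a connection. The baseline values here are hypothetical; substitute whatever versions your configuration standards actually require.

    #!/usr/bin/env python3
    # Minimal sketch: verify a host meets a documented minimum patch level
    # before it is permitted on the network. Baseline values are hypothetical.
    import platform
    import sys

    def meets_baseline():
        system = platform.system()
        if system == "Windows":
            # platform.version() looks like '10.0.19045'; compare the build number
            build = int(platform.version().split(".")[-1])
            return build >= 19045             # hypothetical minimum build
        if system == "Linux":
            # platform.release() looks like '5.15.0-91-generic'
            major, minor = (int(x) for x in platform.release().split(".")[:2])
            return (major, minor) >= (5, 15)  # hypothetical minimum kernel
        return False  # unknown platform: fail closed

    ok = meets_baseline()
    print(f"{platform.system()} {platform.version()}: "
          f"{'PERMIT' if ok else 'DENY'} network connection")
    sys.exit(0 if ok else 1)

In practice a check like this is usually enforced by a network access control solution rather than a standalone script, but the point stands: the documented baseline should be specific enough to be testable.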

These are clearly not all of the possible intersections or gaps that might occur in how an organization approaches configuration management. In developing an information security program, each organization will need to identify the relevant services, processes, and programs that represent how configuration management is achieved. As part of a process of constant improvement, the next logical step would then be to take a closer look at the internal process interrelationships and try to identify any gaps that might exist.

Where to from here?

By evolving our view of how to establish and control the integrity of different devices and technologies, “configuration management” becomes more about “configuration assurance.” Instead of approaching configuration management as a somewhat unregulated process kept in check by periodic review (audit), perhaps it is more appropriate to treat it as an assurance process meant to ensure that system integrity is maintained over time.

In the end, one of the biggest enemies of information security is time. Even if you have bullet-proof security controls in place today, they will probably not offer much protection against the vulnerability or exploit that a hacker will identify tonight, or that a vendor will announce tomorrow (or on Patch Tuesday).

Security Best Practices With Mobile Hotspots

The use of mobile hotspots has skyrocketed over the last couple of years, and with the release of 4G, it’s pretty obvious why. It’s not only for the added mobility, either. I can personally always rely on my smartphone’s 4G service being faster and more reliable than the shared hotel or coffee shop wireless, on top of possibly being more secure. I say possibly, because just enabling the mobile hotspot feature on your phone doesn’t necessarily make it a more secure option. In fact, if you’re using the default settings for your mobile hotspot, it’s very likely that it isn’t. This brings about the usual risks of an attacker gaining unauthorized access, in addition to a risk to your data usage. Unless you’ve got an unlimited data plan (do those even exist?), an attacker leeching off your hotspot can potentially push you over your allotted monthly data limit.

Configure Your Mobile Hotspot

With that, I would like to provide some tips and best practices to ensure that you are secure when using your mobile hotspot. Some of these tips aren’t new; they apply to securing access points in general, but I have tailored the recommendations to be more specific to hotspots. So here they are, in no particular order of importance:

1. Use Obscure SSIDs

Before I dive into this, let me explain a little about what I mean by an SSID. SSID is an acronym that stands for “Service Set Identifier,” which, like most technical acronyms, doesn’t do a great job of explaining what it actually is. Simply put, the SSID is just the name you give your hotspot so that your other devices can identify it. For example, when you go to pick a wireless network, all the wireless network names you see are SSIDs. With that said, most hotspots come with defaults, usually in the realm of “Verizon Mobile Hotspot”. It’s best to avoid default names like that, or even custom ones such as “Tammy’s iPhone”, as these names give an attacker some idea of the service and/or model of device being used, which in turn allows them to target the default settings for those devices. Take this opportunity to use something creative, such as “NSA Surveillance”. Attackers will have a more difficult time profiling your hotspot, plus you’ll probably give some people a good chuckle. Another point to note: hiding your SSID isn’t necessary, as an attacker can discover a “hidden” SSID with little effort anyway.

Obscure SSID

2. WPA2 Security – Always

Most hotspots will give you the option to change the security encryption being used. The options typically range from Open, which requires no encryption or passcode at all, up to WPA2 PSK, which is the latest standard and uses a very strong level of encryption. Always use WPA2 security when setting up a hotspot. All smartphones that provide hotspot functionality should offer it as an option, and if yours doesn’t, it’s probably time to upgrade to one that does. WPA is the only acceptable fallback: even though it’s not as strong as WPA2, it’s still very secure when combined with a complex enough passcode (covered next). WEP should be avoided entirely, since anyone with $30 and access to Google can find step-by-step instructions for cracking the passcode in about 5 minutes. If your hotspot supports WPS, it’s recommended that you disable it as well: a known design flaw lets an attacker verify each half of the 8-digit WPS PIN independently (and the last digit is only a checksum), so recovering the PIN, and with it the WPA passcode, takes at most around 11,000 guesses instead of the 10 million you would otherwise expect.
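
If the arithmetic above seems surprising, here is the back-of-the-envelope version in plain Python; the numbers come straight from the WPS protocol design, not from any particular device:

    # Why WPS PINs fall fast: the protocol confirms each half of the
    # 8-digit PIN separately, and the final digit is only a checksum.
    first_half  = 10 ** 4   # 4 freely chosen digits
    second_half = 10 ** 3   # 3 free digits (the 8th is a checksum)
    naive       = 10 ** 7   # cost if the PIN were one 7-digit secret
    print(f"{first_half + second_half:,} guesses vs. {naive:,}")
    # prints: 11,000 guesses vs. 10,000,000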

WPA2 Security

3. Use Complex Passwords

Even though this is probably fairly obvious, having a complex password is just about the most important component of securing your mobile hotspot. A common myth is that since your hotspot is only on when you need access, it’s not likely that an attacker will guess your passcode in the allotted time frame. However, because of the way WPA works, an attacker only needs enough time to capture the handshake; after that, they can attempt to crack the password offline. There are many good (and bad) methods of coming up with complex passwords. Given that the nature of a mobile hotspot is to turn it on only when needed, my recommendation is to strive for a passcode that’s complex enough, and to change the passcode every time you use your hotspot. The goal is that your passcodes are complex enough that an attacker cannot reasonably crack them in the allotted time, and even if one were cracked offline, it would be useless because it will have changed by the next time you use your hotspot. The challenge then becomes coming up with a complex enough password every time you go to use your hotspot. In cases like this, I am a big fan of the strategy put together over at XKCD.com, which is to take four random common words and put them together to form the password: for example, orange + finger + core + sleepy. This makes passwords easy to come up with and easy to remember, while providing enough entropy that an attacker couldn’t reasonably crack them.
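
If you want to automate the four-random-words trick, a minimal sketch looks like the following. The wordlist path is a hypothetical placeholder; any list of a few thousand common words (one per line) will do.

    #!/usr/bin/env python3
    # Minimal sketch of an XKCD-style passphrase generator.
    import math
    import secrets

    WORDLIST = "common_words.txt"   # hypothetical: one common word per line

    with open(WORDLIST) as f:
        words = [w.strip() for w in f if w.strip()]

    # secrets (not random) for cryptographically sound choices
    passphrase = "".join(secrets.choice(words) for _ in range(4))
    print(passphrase)   # e.g. 'orangefingercoresleepy'

    # Rough entropy: four words from a 2,048-word list give
    # 4 * log2(2048) = 44 bits, comfortably beyond what an attacker
    # can brute-force during a short hotspot session.
    print(f"~{4 * math.log2(len(words)):.0f} bits of entropy")

Generating a fresh passphrase each session fits neatly with the change-it-every-time advice above.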

4. Turn It Off When Done

This one may seem a bit obvious as well, but since I’m guilty of it myself, I thought it was worthy of its own section. Turning your hotspot off when you aren’t using it is not only easier on your data usage (and monthly charges), it also shrinks the window a potential attacker has to attempt to hijack access to it. And if you’re as absent-minded as I am, many hotspots come with an inactivity timeout option, which will automatically shut the hotspot off after X minutes of inactivity.

Inactivity Timeout

5. Avoid The Defaults

This is more of a blanket statement that touches on the areas above: most of the default settings for your mobile hotspot should be avoided, with WPA2 security being just about the only exception. Changing the default passcode is the most important item to note. Even if the default passcode meets the complexity requirements in tip 3, it may be re-used across devices of the same model and could already be part of an attacker’s dictionary.

Complete Settings

That’s it! Follow these tips as a best practice and enjoy the freedom of high-speed Internet anywhere you go (or at least anywhere you have 4G). If you have any questions or comments, feel free to connect with me on Twitter @tehcolbysean.