Low Hanging Fruit: Credential Re-Use Vectors and Password Management

This post is going to switch back and forth a few times between two issues that plague enterprises as they grow: user credential re-use and administrative password management. Frequently, the bad credential hygiene of a small enterprise will stick around well into its transition to a mid-level enterprise. Most of these bad habits don’t survive into the large enterprise because they don’t scale well, but during these transitions, they can expose an organization to unnecessary risk. I’m hoping to outline some best practices that scale well enough to mitigate these risks without burdening smaller teams with overwrought credential practices.

Segmentation: Understanding Your Privilege

Some of the biggest vectors for abuse tend to be administrative credentials. The temptation for a smaller organization to dole out permissions to trusted, knowledgeable users is very strong, and often it’s in the business’ interest to do this in order to allow more delegation of responsibility. A staff member who knows enough to help handle password resets can augment a helpdesk in meaningful ways that keep business operational without bottlenecks. What I’m advocating here is separating normal user accounts from privileged accounts. Let’s say you have a username convention under which John Smith has the username smithj. That smithj account should represent the person in their normal business role. His e-mail, messaging, and telecom services are all tied to smithj. If we wanted to allow John to help with tasks outside his primary role, and those tasks required escalated privileges, then instead of granting smithj additional access, we would create a new administrative account for him to use, Xsmithj, or something similar. This additional account segments his normal, unprivileged usage from his additional, privileged work. If you keep these administrative accounts separated from normal users (no mailbox, excluded from address books, and so on), you significantly reduce the chance of an attacker escalating access even if John’s normal account is compromised.

I would extend this practice to your entire IT staff, and then restrict remote access to servers to only those X accounts. When someone asks John to reset their password, in our example, he logs into a domain controller using his X account, does what needs to be done, and then closes the session. Even better, we could install remote admin tools on his workstation and have him switch between smithj and Xsmithj, so that he doesn’t even need to log into a remote server. Create groups based on roles within your organization, and grant only the permissions those groups need to do their work. What this accomplishes is that if you see an X account (or a Z account, W account, whatever convention you want to use) doing something, you know that it has expanded capabilities. This keeps your regular user accounts from being laden with unexpected access that could present an attacker with an opportunity for abuse.
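As a small illustration of the naming convention above, the split between normal and privileged accounts can be encoded in a couple of helper functions. This is only a sketch: the `X` prefix comes from the example in this post, and the function names are my own invention, not part of any directory tooling.

```python
# Sketch of the account-naming convention described above:
# smithj is the normal user account, Xsmithj is the privileged one.
# The "X" prefix is just the example convention from this post; note
# that a real implementation would need to guard against usernames
# that legitimately begin with the prefix character.

ADMIN_PREFIX = "X"

def derive_admin_account(username: str) -> str:
    """Return the privileged counterpart of a normal user account."""
    if username.startswith(ADMIN_PREFIX):
        raise ValueError(f"{username} already looks like an admin account")
    return ADMIN_PREFIX + username

def is_admin_account(account: str) -> bool:
    """True if the account name follows the privileged-account convention."""
    return account.startswith(ADMIN_PREFIX)

print(derive_admin_account("smithj"))  # Xsmithj
print(is_admin_account("Xsmithj"))     # True
```

A check like `is_admin_account` is also handy in audit scripts: any account matching the convention that still has a mailbox or appears in the address book is a policy violation worth flagging.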

Finally, a quick word on service accounts. So many applications and pieces of equipment now offer AD integration (we’ll get back to that later) that it’s not uncommon for a first-time setup wizard to ask for domain controller addresses and directory credentials. The worst thing you can do here is plug in your own administrative credentials during setup. What typically happens is that when a user leaves the organization, the services tied to their account break the moment you disable it; to avoid that breakage, the account sometimes stays active, with the same password, years after they’ve left. In an ideal world, you’d determine the exact privileges needed and provision each account accordingly. In reality, segmenting services from users is far more valuable than restricting the specific permissions required. This may be shocking, but I would rather have twenty service accounts in the domain admins group than twenty services using a single domain admin user account, as long as they were all using unique passwords.

Password Requirements: Stop Setting Your Users Up To Fail You

I have some opinions about password requirements, mostly informed by my understanding of human memory and recall. Some of my best success in getting users to adopt better passwords has come from changing password requirements to reduce symbol complexity and increase length, and then providing access to a tool that generates strong, memorable passwords. To this end, I am a big fan of diceware-style passwords. Much has been written on this topic, but the general idea is to increase password length using random words while reducing complexity requirements. This improves the overall entropy of a password and produces more memorable passwords. Typically I’ll deploy a small web service, or direct users to an existing one, to generate these passwords. The result is, in my experience, better passwords, fewer tickets, and less credential re-use. If you make it easy for users to create a new, unique password, you reduce the chances they will just type in the same old password for everything.
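The core of a diceware-style generator fits in a few lines. Here is a minimal sketch using Python’s `secrets` module (a CSPRNG); the tiny wordlist is only for illustration, and a real deployment would load a large curated list such as the EFF long wordlist:

```python
import math
import secrets

# Toy wordlist for illustration only. The EFF long wordlist has 7776
# entries (~12.9 bits of entropy per word); this list gives only
# log2(12) ~= 3.6 bits per word and should never be used in production.
WORDLIST = [
    "correct", "horse", "battery", "staple", "orbit", "walnut",
    "canyon", "lantern", "puzzle", "velvet", "meadow", "drift",
]

def diceware_passphrase(words: int = 5, wordlist=WORDLIST) -> str:
    """Pick `words` random words with a CSPRNG and join them with spaces."""
    return " ".join(secrets.choice(wordlist) for _ in range(words))

phrase = diceware_passphrase()
bits = math.log2(len(WORDLIST)) * 5
print(phrase)
print(f"~{bits:.1f} bits of entropy with this toy list")
```

With the EFF long list, five words yields roughly 64 bits of entropy, which is comfortably stronger than a typical 8-character symbol-laden password and far easier to remember.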

A friend of mine wrote a little piece of software that pulls numbers from a quantum random number generator on the internet to produce these kinds of passwords. I’ve used it to build a small web page that generates several of these passwords for my users, and they just pick the one they like. I love it because of the nerd factor inherent in generating random numbers from quantum effects. Here is a link to the software, in case you’d like to use it:

https://github.com/roryk/quantum-diceware

Additionally, the EFF has published a great article on the topic of password generation along with much better wordlists:

https://www.eff.org/deeplinks/2016/07/new-wordlists-random-passphrases

Directory Integration: Making it Someone Else’s Problem

Switching back to administrative credentials: as mentioned above, so many applications and management interfaces now support directory integration that using your directory to segment access can provide a lot of functional control, all from the same central pivot point. Good discipline in segmenting administrative access from users and services means that when a password does become compromised, the exposure is limited and corrective action can be taken without impacting multiple applications. These are all really good things that help mature your security practices quickly and scale with your business’ growth.

What I do want to concede here is that we’re fundamentally shifting work off your security and network team by adding work for your system administrators. Federating authentication to cloud services frequently requires deploying some kind of SAML service, often ADFS, and a lot of network equipment authenticates only via RADIUS or TACACS+, meaning you need to administer additional systems, all of which cause high-impact disruptions when they fall over. In my time, I have deployed ADFS, Aruba ClearPass, and Ruckus Cloudpath, all of which are great applications with varying degrees of complexity and cost. For my part, I feel that Cloudpath is a tremendous value with somewhat reduced customizability. ClearPass is tremendously powerful, at an increased cost depending on the features you’re interested in. ADFS requires no licensing, but it is far from my favorite application to administer, and best practices for a mid-sized enterprise call for four servers for a highly available deployment with good service isolation.

Gaps: When Good AAA Infrastructure Goes Bad

Let’s say you’ve taken everything thus far to heart: you have great segmentation within your directory, and your users are following best practices and behaving themselves. Then your vSphere deployment falls over because of a power outage or storage failure. Suddenly, your RADIUS servers are no longer available to authenticate your login attempts to the network switch, which you suspect might be contributing to the outage. This all sounds like a really bad after-hours call, and the coffee shop you like doesn’t even open for another four hours.

The solution here is to maintain a disciplined local credential management process on top of your directory-integrated privilege segmentation. To do this, you need a password manager, specifically one that is available offline and accessible independent of your internal services. One option here is Password Safe, which was originally designed by Bruce Schneier. It’s free, there are many implementations, and I trust the cryptography involved. There are many other options, many of which I have no experience with. Personally, I use a service called 1Password, which allows me to synchronize data between all of my devices and access it offline, all with very strong cryptography.

My recommendation here, regardless of which password manager you’re using, is to establish a unique local administration credential for everything in your environment. As a compromise, you can re-use passwords for the same type of device; specifically, here I’m thinking of network switches. A common approach I have seen is to use a shared password for each model of device, given that most organizations stagger equipment upgrades as hardware ages. This means you’ll have a rotating set of passwords as equipment ages out of your network.

Almost all password managers rely on a master password to access the information they contain, and this is the password you need to be most careful with. I would rotate it on an annual basis, and each time I rotated it, I would burn an offline backup of the vault to CD, along with the master password in a text file, and keep that backup in a secure location for disaster recovery purposes. I’ll also commonly back up the password vault to my local system regularly, just in case. Because the master password encrypts all of the other credentials, the information can reasonably be assumed secure even if a system hosting it is compromised, so long as you are using a very strong master password. There are definite caveats to all of this; I am mostly outlining basic practices that should be broadly applicable for most of my customers, regardless of their size or available budget.
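The reason the master password carries so much weight is that vault contents are encrypted with a key derived from it. Here is a rough sketch of that derivation using PBKDF2 from Python’s standard library; real products vary (some use Argon2 or scrypt, and 1Password adds a secret key on top), so treat the parameters here as illustrative only:

```python
import hashlib
import os

# Sketch of how a password manager typically turns a master password
# into an encryption key: a random salt plus a deliberately slow key
# derivation function. The iteration count is illustrative; vendors
# pick their own parameters and algorithms (PBKDF2, scrypt, Argon2).

def derive_vault_key(master_password: str, salt: bytes,
                     iterations: int = 600_000) -> bytes:
    """Derive a 32-byte key; high iteration counts slow brute force."""
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode("utf-8"), salt, iterations
    )

salt = os.urandom(16)  # stored alongside the vault; not a secret
key = derive_vault_key("correct horse battery staple", salt)
print(len(key))  # 32
```

The practical takeaway: an attacker who steals the vault file must brute-force the master password through this slow derivation, which is why a long diceware-style master password keeps even a stolen backup CD reasonably safe.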

Credential Re-Use: Why Are We So Wound Up About This Anyways?

Coming back to the rest of our users: why is credential re-use such a hot topic anyway? The reason is the proliferation of cloud-based services and the lack of control inherent in the business model they offer. It’s not uncommon for several critical aspects of a business to be moved out to the cloud. These services offer a lot of benefits, especially for smaller organizations without the staff or resources to run every application in-house. But unless you’ve federated your authentication using some kind of SAML service, it’s very likely the credentials your users provide to these services have no relationship at all with your internal policies. The temptation for a user to re-use the password they’ve already memorized is very strong, and your ability to force password changes will depend on the features of that service.

The catch here is that breaches happen, and these online services are a great target for attackers, since a single breach can yield data from many organizations at once. Overwhelmingly, the usernames for these services are e-mail addresses associated with the domain paying for the service, which map directly to accounts on your local domain. Even worse, there are a large number of online applications, forums, and so on that do the same thing, even if your organization has no formal relationship with them. If you haven’t segmented user access, or even if you have, this opens up a number of attack vectors. Typically, the criminals who breach an online service won’t be the ones attacking you; instead, they will offer these credentials up for sale on the dark web. I strongly encourage all of my customers to look into services like LMNTRIX RECON to monitor these marketplaces for data related to their organization. It offers a relatively inexpensive way to understand your exposure on the dark web and allows you to take action even if your users are re-using credentials.
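A related, free way to check exposure is the Have I Been Pwned “Pwned Passwords” range API, which uses a k-anonymity scheme: only the first five hex characters of a password’s SHA-1 hash ever leave your network, and you compare the returned hash suffixes locally. This sketch shows just the local half of that exchange (the HTTP call is omitted):

```python
import hashlib

# k-anonymity scheme used by the Have I Been Pwned range API: hash the
# password with SHA-1, send only the first 5 hex characters, and check
# the returned list of suffixes locally. The service never sees the
# password or even its full hash.

def hash_prefix_and_suffix(password: str) -> tuple[str, str]:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hash_prefix_and_suffix("password")
print(prefix)  # 5BAA6
# A client would then GET https://api.pwnedpasswords.com/range/<prefix>
# and search the response body for `suffix` to see if the password has
# appeared in known breach corpora.
```

Checks like this are a useful complement to marketplace monitoring: they tell you whether a candidate password is already burned before a user ever adopts it.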

Finally, one of the more sophisticated mechanisms to protect directly against credential re-use is technology that can detect and block internal credentials from being submitted to external websites; a number of endpoint and firewall vendors provide such features. Overwhelmingly, these features are delivered as endpoint security agents or browser plugins, or rely on SSL decryption to gain visibility into your users’ activity on the web. Again, there are certainly caveats, but these technologies can also defend against spearphishing attacks in which an attacker mimics an authentication page your users see every day. These fakes are very convincing, and their preparation and deployment can be automated. I have seen very small regional higher-education institutions targeted by these attacks, and the conclusion I draw is that the time investment to build these fake login pages pays off frequently enough to be profitable.

For smaller organizations, I recommend a targeted approach to investing in these sorts of technologies, as well as other products that can specifically protect against spearphishing. By targeted, I mean that if you can’t invest organization-wide in a product without committing significant budget, you should at least get the best you can for the most critical users in your environment. Purchasing, HR, business units that handle critical or customer/student data, and all of your VIPs present juicy targets for these criminals. Covering them, even if you can’t cover everyone, can be a good investment compared to the potential cost of a high-profile breach.

Closing Thoughts, Future Topics

I hope this first installment in the Aquila In Security blog’s Low Hanging Fruit series has been useful, if not enjoyable. Our intent is to regularly update this content with other articles, product demonstrations, and technology evaluations. The Low Hanging Fruit series is intended to provide accessible information that smaller organizations can use to move up the security maturity ladder quickly with minimal product investment, by detailing best practices that translate to larger scale as an organization grows. Because Aquila is heavily invested in our customers’ growth, I felt it was important to address core business practices that even very small organizations can improve, rather than pitching products that, while they help, leave blind spots better resolved with solid internal processes.

Stephen Crim

Security Architect at Aquila
Stephen is a security solutions architect for Aquila, focused on state and higher education. He partners with new and niche OEMs that displace accepted industry norms. Through open integrations and standards, Stephen helps customers build environments customized for their teams and budgets. Before Aquila, Stephen administered data networks at the Chicago O’Hare and Midway airports. More recently, he was a network and data center administrator for a regional university in Southern New Mexico.