There are so many problems with this article and the previous one it references (How weak passwords and other failings led to catastrophic breach of Ascension).
Specifically, RC4 is a stream cipher. Yet, much of the discussion is around the weakness of NTLM, and NTLM password hashes which use MD4, a hash algorithm. The discussion around offline cracking of NTLM hashes being very fast is correct.
More importantly though, the weakness of NTLM comes from the design of the protocol, not from a weakness in MD4. Yes, MD4 is weak, but NTLM's flaws don't stem specifically from MD4.
Dan Goodin's reporting is usually of high quality but he didn't understand the cryptography or the protocols here, and clearly the people he spoke to didn't help him to understand.
EDIT: let me be more clear here. MS is removing RC4 from Kerberos, which is a good thing. But the article seems to confuse various NTLM authentication weaknesses and past hacks with RC4 in Kerberos.
Obviously RC4 itself isn't the problem. The problem is that Microsoft ships a "ciphersuite" that includes a bad password-based key derivation algorithm that also happens to be tied to a whole pile of bad cryptography. And the real, real problem is that Microsoft still ships a design in which low-entropy passwords can be misconfigured for use in encrypting credentials, which is a nightmare out of the 1990s and should have been completely disallowed in 2010.
While the NSA would, absolutely, use it to elevate existing internal access, it is such low-hanging fruit that they have enough alternative tools in their arsenal that it isn't a particularly big loss. Most of their competent adversaries disabled it years ago (as has been best practice since ~2010).
More likely, it is Microsoft's obsession with backwards compatibility, which, while a great philosophy in general, has given them a black eye several times before vis-a-vis security posture.
Most importantly, the NSA is not just about spying, it is also about protection.
A weakness anyone can exploit in software Americans use is not a good thing for the NSA. If they were to introduce weaknesses, they want to make sure only they can exploit them. For instance in the famous dual_ec_drbg case where the NSA is suspected to have introduced a backdoor, the exploit depends on a secret key. This is not the case here.
On the other hand if Snowden has shown us anything, it is that the NSA is more stupid than it looks.
Those stupid MFD machines have been the bane of my existence as a sysadmin ever since I started in this career many, many years ago.
It's these machines, plus a few really old Windows-only apps deep in the basements of enterprises, that keep this old tech around. There's usually no budget to remediate, and no appetite for it from leadership either.
It's also what happens when the people buying the tech are disconnected from the ones implementing it. Microsoft caters to this.
Just photocopy some currency. Depending on the machine, it has a good chance of bricking the machine with an obscure error code until a service tech comes out, at which point you can point out this machine is really old and why don't we get a new one.
If you'd rather not commit attempted forgery, just print out some Wikipedia pages about the EURion constellation, which is what they detect in money.
Do manufacturers also have personal responsibility for making safe products, or does it fall to consumers to become experts in the myriad different fields necessary to assess the safety of every product they buy?
Given how long it's been deprecated, I'm assuming most versions of Windows since 2000, and Samba, have long since supported more secure options... though from some comments, even the more secure options are relatively weak by today's standards as well.
Aside: still hate working in orgs where you have a password reset multiple times a year... I tend to use some relatively long passphrases, if not the strongest possible... (ex: "ThisHasMyNewPassphrase%#23") I just need to be able to remember it through the first weekend each time I change without forgetting the phrase I used.
> Verifiers and CSPs SHALL NOT require subscribers to change passwords periodically. However, verifiers SHALL force a change if there is evidence that the authenticator has been compromised.
One of the nice cases where it can be helpful that standards themselves, which you can point to, have said to stop doing that.
Yeah, I've made headway on this in other places I've worked... heavy advocate for the only requirement being a minimum length, with the recommendation to use a "phrase", as well as not requiring rotation more often than once a year, if at all... though not strictly matching NIST, some ops find a never require change hard to swallow.
I wrote an authentication platform used by a few govt agencies. The irony is all my defaults match NIST guidelines (including a haveibeenpwned lookup on password set/change), but I needed to support the typical overrides for other hard requirements that tend to come from such agencies in practice.
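For context, the haveibeenpwned lookup doesn't require sending the password (or even its full hash) to the service: the Pwned Passwords range API uses k-anonymity, where the client sends only the first five hex characters of the SHA-1 and matches the suffix locally. A sketch of the client-side split (the helper name is mine):

```python
import hashlib

def hibp_range_query(password: str) -> tuple[str, str]:
    """Split the SHA-1 of a password into the 5-char prefix sent to the
    Pwned Passwords range API and the suffix that is matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

# The client fetches https://api.pwnedpasswords.com/range/<prefix> and
# checks whether <suffix> appears in the returned list; the password
# itself never leaves the machine.
prefix, suffix = hibp_range_query("password")
```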
>> Verifiers and CSPs SHALL NOT require subscribers to change passwords periodically. However, verifiers SHALL force a change if there is evidence that the authenticator has been compromised.
> though not strictly matching NIST, some ops find a never require change hard to swallow.
I think they're right about that. A scheduled change just represents the accumulating probability that there's been a compromise somewhere that didn't come to your attention.
It seems like it would make more sense for a scheduled change to affect all passwords at once, though.
There has to be some balance though, as requiring change too frequently encourages the use of insecure but easy to remember passwords, or passwords that are very similar to the previous one, thus defeating the purpose of the change (e.g. a password containing the year, where the employee only changes the year every time). Best would be pushing for the use of a password manager or auth tokens like Yubikeys.
On changes, as I've mentioned in other threads, I don't think once a year is too bad... also, I'm an advocate of SSO as much as possible with a strong MFA (ideally push selection) option for work orgs. It reduces friction and can actually improve overall security if appropriately managed... that said, building internal apps that have appropriate application access is often harder still in these environments.
I got to work one morning recently, got a message that our MDM required me to change my password, logged into the MDM, turned off that obsolete option, and announced to the company via Slack that we're not doing that anymore.
Every now and again I ponder if I'm happy where I'm at careerwise, and then something like this reminds me that I have the authority to make these decisions, and I decide that yeah, I like being me.
Unfortunately, not all guidelines have caught up. PCI-DSS still requires password changes every 90 days for anything in scope (the cardholder data environment, anything that might even remotely touch payment card data).
> point the right compliance person to the latest NIST guidelines
This only works if that's the only standard they're adhering to. At my employer, the password changes are mandated by their "cyber insurance" policy which hasn't caught up with the times.
I had to help a friend with a small business through a "cyber insurance" policy compliance questionnaire once and it was an incredible exercise in frustration. I would have felt certain the policy was a scam if it wasn't being mandated by a real insurance company the small business did other policies through. The questions didn't seem to have been written by someone with technical experience. Many of the questions wanted Yes/No binary answers for things that are complex technically. Some of the questions led to good discussions about security improvements for a small business, but most of them seemed "cargo culted" from large enterprises without real application to a small business.
That did not leave me with a strong impression of "cyber insurance policies". I suppose it also left me wondering how much we've left very small businesses behind in security as a culture.
Fine until you run into the filter that prevents the new password from having any of the same substrings longer than some limit compared to the old one.
IMHO there are two requirements for a good password:
1. It must be hard for a computer to guess.
2. It must be easy for a human to remember. If you can not set a secure password and then remember it a week later it is a bad password.
This is why I really hate overly strict password requirements that make it hard to remember. These cause people to write it down or do things that appease the password checker but don't make it harder to guess.
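Some back-of-the-envelope arithmetic (my own illustration, assuming passwords are chosen uniformly at random) shows why a long phrase can satisfy requirement 1 while staying friendly to requirement 2:

```python
import math

def guess_space_bits(charset_size: int, length: int) -> float:
    """Bits of search space for a password of `length` symbols drawn
    uniformly from a set of `charset_size` symbols."""
    return length * math.log2(charset_size)

# An 8-char password over all 94 printable ASCII characters is hard to
# remember but gives only ~52 bits; a 5-word passphrase from a
# 7776-word diceware-style list is easy to remember and gives ~65 bits.
complex_8 = guess_space_bits(94, 8)
passphrase_5 = guess_space_bits(7776, 5)
```

The point being that length, not symbol-soup complexity, is what the attacker's search space actually scales with.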
That replaces number two and is the correct alternative in most cases.
There are cases where a password manager may not solve the problem, though. It doesn't help if I forget my disk encryption or work AD password and I need to be able to login before I can get to the password manager in the first place. Enterprise IT is also where you find some of those frustrating password policies, such as long and complex passwords with mandated changes every month or two, and where you usually can't choose your management tools.
Of course those particular passwords usually get typed so often that remembering them isn't much of a problem. And password managers work well for pretty much all secrets that aren't needed that often.
Yeah. I've been in the habit of keeping the (encrypted) password file in multiple places. So I can even get the password off my phone if I really need to.
If it's the IT managed computer login then you couldn't use a password manager for it, right?
I think this is more the realm of using Windows Hello or Apple Touch ID (AFAIK no good, simple, standard built-in way exists for Linux distros) to get the first OS login, and then you can use your password manager once you are logged into the OS.
What method/program are you talking about? Does it support FDE? Is it reasonably supported with the methods expected by end users (fingerprint, face, smartcard, etc.)?
Every time I've tried, it's been finicky and I've had to use non-standard tools to get it working.
I'm a different commenter but yeah, solutions exist. For example, systemd-cryptenroll lets you use a FIDO token (or TPM or PKCS#11 smartcard) to unlock your encrypted disk, and it's very easy to set up. Quite literally a single command.
Windows Hello serves the same purpose for Windows, though I'm sure there are caveats/differences.
If it's a fido hardware token you still need to make sure you have a backup token. It's a lot simpler on windows/macos where you can use biometrics for the same purpose.
I tend to never use my password manager for my primary OS logins for desktops/laptops I physically access. Fortunately, I rarely have to keep more than 5 or so memorized at a time (including my password manager, Bitwarden/Vaultwarden).
How did RC4 become so widespread when it came from a leak? Additionally, why was it the de facto standard stream cipher in the 90s, even though it was known to be flawed? Just the speed?
In addition to the other sibling comments, I think there's also a factor of greatly increased computing power. Back in the 90s and earlier, we just didn't have the computing power generally to encrypt everything with super-strong algorithms in realtime. This probably also affects who can practically do development work on state-of-the-art algorithms.
I recall, when it was originally created, SSL was a rarity, a thing only for your bank account and the payment page of online stores, because nobody could afford the CPU load to encrypt everything all the time. Now, it's no big deal to put streaming video behind TLS just to ensure your ISP can't mess with it.
RSA was still selling RC4 into the mid-2000s as a product. While open source variants of RC4, often trying to avoid the RSA trademark by calling themselves things like ARCFOUR, started circulating in the 1990s, there was still a sense that RC4 was backed by a security company.
Also, even though flaws were discovered almost as soon as the open source variants had reverse engineered the RC4 algorithm, they were of the "flaws exist but exploiting them needs things outside our current threat models" variety. It took a multi-stage, multi-year effort to get from the earliest flaw discoveries in the 90s to the most devastating exploits, developed around 2013-2015, that took advantage of those flaws in reproducible ways.
I also remember that in the 90s the reverse engineered, open source efforts felt like shining beacons of hope, like PGP, for freeing "enterprise grade" security algorithms from trade secret-protected corporate and governmental interests and giving them to "the common people". RC4 was simple to implement and easy to reason about, but gave "good enough" security for a lot of uses, certainly far better than "no security unless you pay a company like RSA, and only if you don't plan to export your software outside of the US". That's why RC4 was the basis of a 90s idea called CipherSaber [1]: that you could implement your own security suite, one that you controlled and companies couldn't take from you.
Of course, things have shifted so much since the 90s when security suites were trade-protected and export-controlled. The security through obscurity of the algorithms involved behind trade secrets laws is no longer seen as an advantage and the algorithm being public knowledge has started to be a part of security suite threat models. Today's advice is never write your own security suite because there are several well regarded open source suites that have many eyes on them (and subsequently vulnerability plans/mitigations). Governments in the internet age have had to greatly relax their import/export controls on cryptography. We live in a very different world from the world RC4 was originally intended to secure.
It's fast, easy to implement, has very concise code, takes any key length up to 256 bytes, comes from a famous cryptographer, and there weren't a lot of alternatives.
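To illustrate the "very concise code" point, here is the whole cipher, key schedule (KSA) plus keystream generator (PRGA), in a few lines of Python. A sketch for illustration only; RC4 is broken and should never be used in new code:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute S based on the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream with data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Because it's a stream cipher, the same call both encrypts and decrypts: `rc4(key, rc4(key, msg)) == msg`.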
Because "everybody uses RC4" (the sibling comment from dchest is correct). There was a lot of bad cryptography in that period and not a lot of desire to improve. The cleanup only really started in 2010 or thereabouts. For RC4 specifically, it was this research paper: https://www.usenix.org/system/files/conference/usenixsecurit... released in 2013.
I think this is a really good question, for what it's worth. Best I can come up with is that, at the time, our block cipher blocks were mostly 8 bytes wide, which doesn't leave a lot of headroom for CTR.
Reasonable! Anyone who cares about AD security has been AES-only for at least a year now, and most likely much longer, and it's not like these mitigations are especially hard, unless you're still running some seriously obsolete software.
Nope. AES is not trivial to implement securely, so most implementations simply rely on hardware support. ChaCha20 and XChaCha20 are more secure ciphers.
To be clear, NetInfo is not an alternative. It's just not generic enough and not really a good fit for Windows. NetInfo is too much of a Unix solution, so there's no cross-realm/domain "forest" functionality, no support for SIDs, etc.
AD is perfectly fine. It's actually really good at what it is: a highly-available Kerberos implementation with an integrated directory server. It's not as dominant as it used to be because there are better ways to handle identity for web applications and zero-trust environments, but I don't think that diminishes what AD was good at.
> AD has built-in mechanisms where a random person can execute code on the AD themselves
Could you provide an example? I'm sure I know what you're talking about, but the way you put it I'm having a hard time figuring out what you mean.
> Most people are not perfect; hence, most people have security issues with AD (see the never-ending tale of cryptolocked companies)
Yeah, but, how many of those ransomware attacks exploit misconfigured AD environments rather than something more banal like harvesting credentials accidentally checked into Git, or spear phishing for a target? Identity, in general, is hard.
AD allows connections between two computers that are registered against the Active Directory, including a random laptop and the AD servers themselves.
This is a fundamental difference versus something like oauth: in the former, everything is done to allow RCE on the AD: the code exists; in the latter, everything is done to prevent RCE on the issuer.
Identity is hard? Identity is a lot simpler once you assume that:
- people make mistakes
- code is buggy
- infrastructure has issues
This is why using things like oauth instead of AD's authentication mechanism is good: because it is secure by default, and you must try really hard to allow a wide range of attacks.
In the Windows world, you connect to a server using RDP. I thought this would be implied. RDP is a means to connect to a remote host and, from there, execute code. Hence, code execution.
What on earth are you talking about? RDP and AD are pretty much orthogonal to each other. You can use an AD account to connect to a domain-joined remote server over RDP, but at that point you're just... logging into a machine, same as any other remote protocol. You prevent bad actors from doing this by not giving them permissions to log in to that server. To call this "code execution" is really odd. Remote code execution as a vulnerability almost always refers to an unintentional behavior in software that allows an attacker to execute arbitrary code as part of that process. Referring to a user logging into a machine with the appropriate permissions and running software as "code execution" is not typical, and is not a vulnerability in any normal sense of the term.
Because logging in to a remote server is not "executing code on that remote server"..?
Same as any other remote protocol ? Yes. But we are not talking about that, we are talking about active directory, whose main purpose is to authenticate and authorize stuff. Yes, you can configure everything. But just like a wall is better than a door with a lock .. see what I'm saying ? In the AD world, allowing remote code execution is not a bug, it's a feature. Call it a vulnerability if you want;
A direct competitor of AD is oauth, which does not allow people to execute code on the issuer
Number of cryptolocks due to oauth: none (that I know of). As if theory and practice sometimes meet...
I understand that you like AD, and that's fine. The original post was about security and I stand by my point: thinking that we are perfect, that others are doing mistakes but "not us" is not good for security. Neither is playing with fire, as per the vast quantity of burnt people
> In the AD world, allowing remote code execution is not a bug, it's a feature.
This is the assertion that I think you have failed to prove. RDP and WinRM are just remote access protocols, like SSH or what have you. AD doesn't have to be involved in their use, so I'm not sure how "RDP allows you to log into a server remotely" is AD's problem. Or even a problem at all, since that's what its meant to do.
> A direct competitor of AD is oauth,
It really isn't. OAuth is for authorizing third parties access to client resources, not for authentication. By the time you're getting access tokens with OAuth, you've already authenticated with your identity provider. Perhaps you're referring to OpenID Connect, which is built on OAuth 2.0? In any case, AD and OAuth/OIDC don't really compete with each other. AD is intended to be used on internal enterprise networks to simplify authentication and authorization across a fleet of machines, and OAuth/OIDC have a much more pronounced focus on web.
> which does not allow people to execute code on the issuer
I'm not sure what this means. When you say issuer, are you referring to the auth server that issues ID tokens? What if I'm hosting my IDP in AWS and use an OIDC integration to access my AWS admin console and remotely log-in to my IDP server? Am I not then using it to execute code on my auth server?
"This is the assertion that I think .." - you are showing bad faith;
"OAuth is for authorizing third parties access to client resources, not for authentication" - just like AD, oauth is used for authentication and authorization; See the fields sub, scope, audience etc;
"OAuth/OIDC have a much more pronounced focus on web" - of course, we do not use "web" inside internal enterprise networks;
"When you say issuer" - issuer is a keyword, not a random word; But again: you know it;
"Am I not then using it to execute code on my auth server" : can you execute any kind of code on AWS' IAM servers (any server will do) ? Please share some details;
> just like AD, oauth is used for authentication and authorization
In a sort of roundabout way, but in those cases what the relying party is accessing are the user's identifying details.
> of course, we do not use "web" inside internal enterprise networks
That's not really what I mean. I would never expose an AD domain to the internet, that's not what it's for.
> can you execute any kind of code on AWS' IAM servers
That's not what I was saying, I was saying it in the context of a self-hosted identity provider. If all you've meant by this entire exchange is that OAuth means you don't have to worry about security because you've outsourced it to someone else, then I've really wasted my time.
Never underestimate the propensity for lazy windoze admins everywhere to ride with defaults for decades. They could fix it, but they typically don't know any better.
There are still medical, hospitality, government, and industrial shops that probably run off NT4/2000 DCs somewhere, or at least XP-era things they've been told to kill, but see above. Microsoft technically supported XP until 2019 in "IoT" versions, probably mostly for Oracle POS systems that would never die after they acquired Micros 20 years ago; probably still in your favorite restaurant until around then.
The joys of a windoze world, thanks microsoft for the advent of the lazy admin.
Etype 23 (rc4-hmac) gets ~3500 kH/s, 18 (aes256-cts-hmac-sha1-96) gets roughly 2500 kH/s. Big difference, but somehow I thought it would be much bigger? 2.5M guesses/second is still not so bad.
I've done kerberoasting and ASREProasting a handful of times only, but from what I recall, RC4 can be cracked within reasonable time regardless of your password complexity. But with AES, if you have a long and complex service account password, it will take decades/centuries to crack. But (!!) it is still quite common to use relatively weak passwords for service accounts; a lot of times the purpose of the service is included in the password, which makes guessing a bit easier.
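Rough arithmetic (my own numbers, plugging in offline rates of the order quoted upthread) shows why password length dominates the crack time far more than the etype difference does:

```python
def years_to_exhaust(charset_size: int, length: int, guesses_per_sec: float) -> float:
    """Worst-case years to exhaust the keyspace of a random password."""
    return charset_size ** length / guesses_per_sec / (3600 * 24 * 365)

# At ~3.5 MH/s against etype 23 (rc4-hmac):
short_lower = years_to_exhaust(26, 8, 3.5e6)   # 8 lowercase chars: under a day
long_lower = years_to_exhaust(26, 16, 3.5e6)   # 16 lowercase chars: hundreds of millions of years
```

Which is why a 25+ character random service account password is effectively uncrackable at these rates even under rc4-hmac, while a short or guessable one falls either way.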
My criticism is that Kerberos (as far as I'm aware) does not provide modern PBKDFs (keyed argon2?) that have memory-hardness in place. That might be asking too much, so why doesn't Microsoft alert directory administrators (and security teams) when someone is dumping tickets for kerberoasting by default? It's not common for any user or service to request tickets for literally all your service accounts. Lastly, Microsoft has azure-keyvault in the cloud, but they're so focused on cloud that they don't have an on-prem keyvault solution. If a service account is compromised, you still have to find everything that uses it and change the password one by one. Whereas if there were a keyvault-like setup, you could more easily change passwords without causing outages.
Rotating the KDC/krbtgt credential is also still a nightmare.
From what bits I've heard, Microsoft expects its users to be using EntraId instead of on-prem domains (computers joined directly to entra-id instead of domain controllers). That's a nice dream, but in reality, 20 years from now there will still be domain controllers on enterprise networks.
Kerberos has FAST for truly addressing the offline dictionary attack issues with PA-ENC-TIMESTAMP. FAST is basically tunneling, encrypting using some other ticket. With PKINIT with anonymous clients it's pretty easy to get this to be good enough, but Windows / AD doesn't support that, so instead you have to use a computer account to get the outer FAST tunnel's ticket, which works if you're joined to the domain and doesn't work otherwise.
There's also work on a PAKE (zero-knowledge password proof protocol) which also solves the problem. Unfortunately the folks who worked on that did not also add an asymmetric PAKE, so the KDC still stores password equivalents :(
> Rotating the KDC/krbtgt credential is also still a nightmare.
I've done a bunch of work in Heimdal to make key rotation not a nightmare. But yeah, AD needs to copy that. I think the RedHat FreeIPA people are working on similar ideas.
> That's a nice dream, but in reality 20 years from know there will still be domain controllers on enterprise networks.
SSPI and Kerberos are super entrenched in the Windows architecture. IMO MSFT should build an SSP that uses JWTs over TLS, using PKI for server auth and JWT for client auth, using Kerberos principal names as claims in the JWTs and using the PKINIT SAN in server certs to keep all the naming backwards compatible. To get at the "PAC" they should just have servers turn around and ask a nearby DC via NETLOGON.
Do you know if FAST and the work on PAKE are available for use in AD?
Heimdal looks very cool, I'm reading up on it to learn about it a bit more. Also, nice work on the SEO! On ddg, searching for "Heimdal" gives your site as the #1 result, beating even wikipedia for the namesake.
Active Directory does support FAST. It also supports tunneling over HTTPS, which also buys protection for weak pre-authentication mechanisms.
Idk about AD and PAKE.
Heimdal is really cool, though currently a bit on the abandonware side. I'm working on a huge PR that should lead to us doing an 8.0 release with lots of pent-up and very cool features.
What's most cool about Heimdal is the build-a-compiler-for-it ethic that its Swedish creators brought to it. That's why it has a very nice ASN.1 compiler. That's why it has three other internal compilers, one for com_err-style error definition files, one for certificate selection queries, and one for sub-commands and their command-line options.
> I've done kerberoasting and ASREProasting a handful of times only, but from what I recall, RC4 can be cracked within reasonable time regardless of your password complexity
That's not quite right. If the password is sufficiently strong, you won't crack it even when RC4 is used. The password space is infinite.
You might be thinking of the LM hash, where you are guaranteed to find the password within minutes, because the password space is limited to 7 character passwords.
> Rotating the KDC/krbtgt credential is also still a nightmare.
I also disagree there. Just change it exactly once every two weeks or so. Just don't do it more than once within 10 hours. See: https://adsecurity.org/?p=4597
What I wonder is why Windows isn't changing it itself every 30 days or so, just like every computer account password.
> why doesn't Microsoft alert directory administrators (and security teams) when someone is dumping tickets for kerberoasting by default?
Good question. Probably because they want you to license some Defender product which does this.
> I also disagree there. Just change it exactly once every two weeks or so. Just don't do it more than once within 10 hours. See: https://adsecurity.org/?p=4597
That link says wait a week before the second change. There is a good reason for that: because Kerberos is so asymmetric, and because there are badly written apps out there, you'll cause failed logins for them if you rotate too fast. Normally I consider this in the context of a domain compromise, where you have to consider rotating with a lower delay, but that always raises the controversy of causing outages. My original comment is exactly what you said: the rotation should be an automatic and regular event. It should be able to change the credential, track how much the old password is being used, and after the old password hasn't been used in <configured interval>, do another rotation. It can prevent outages by tracking usage that way. I see no good reason why they made the effort to have an old/new password distinction but didn't give admins the option to auto-rotate. Although, I wonder if you can do this now with PowerShell (if the old pw usage is tracked anywhere).
> That's not quite right. If the password is sufficiently strong, you won't crack it even when RC4 is used. The password space is infinite.
You're totally right. I was thinking in terms of the passwords people usually configure, which are 12-18 characters long. But for computer accounts and well configured service accounts, I've seen a 64 character minimum, which should be very hard to crack even with RC4.