Show HN: Anchor – developer-friendly private CAs for internal TLS (anchor.dev)
76 points by benburkert on Nov 1, 2023 | 43 comments
Hi HN! I'm Ben, co-founder of Anchor (https://anchor.dev/). Anchor is a hosted service for ACME-powered internal X.509 CAs. We recently launched our features & tooling for local development. The goal is to make it easy and toil-free to develop locally with HTTPS, and also provide dev/prod parity for TLS/HTTPS encryption.

You can add Anchor to your development workflow in minutes. Here's how:

- https://blog.anchor.dev/getting-started-with-anchor-for-loca...

- https://blog.anchor.dev/service-to-service-tls-in-developmen...

We started Anchor because private CAs were a constant source of frustration throughout our careers. Avoiding them makes it all the more painful when you're finally forced to use one. The release of ACME and Let's Encrypt was a big step forward in certificate provisioning, but the improvements have been almost entirely in the WebPKI and public CA space. Internal TLS is still as unpleasant & painful to use as it has been for the past 20 years. So we've built Anchor to be a developer-friendly way to set up internal TLS that fully leverages the benefits of ACME:

- no encryption experience or X.509 knowledge required

- automatically generated system and language packages to manage client trust stores

- ACME (RFC 8555) compliant API, broad language/tooling support for cert provisioning

- fully hosted, no services or infra requirements

- works the same in all deployment environments, including development

If you're interested in more specific details and strategy, our blog posts cover all this and more: https://blog.anchor.dev/

We are asking for feedback on our features for local development, and would like to hear your thoughts & questions. Many thanks!



What advantages over say, smallstep/certificates, letsencrypt/boulder, django-ca, square/certstrap, or hashicorp/vault (and e.g. OpenWRT's luci-app-acme ACMEv2 GUI) does Anchor offer?

https://github.com/topics/acme

applications/luci-app-acme/htdocs/luci-static/resources/view/acme.js: https://github.com/openwrt/luci/blob/master/applications/luc...

https://openwrt.org/docs/guide-user/services/tls/acmesh

https://developer.hashicorp.com/vault/tutorials/secrets-mana... https://github.com/hashicorp/vault :

> Refer to Build Certificate Authority (CA) in Vault with an offline Root for an example of using a root CA external to Vault.


This is a managed SaaS solution, not self-hosted software like the ones listed. We're more akin to one of the certificate management products in cloud providers, but our target users are not security experts with prior PKI/X.509 deployment experience. We're building anchor for developers who want or need TLS/HTTPS, but don't want the headache & toil of manually setting up & running an internal CA and the extra infra that goes with it.


I'm not sure I understand the value prop. For localhost, you typically just generate a self-signed certificate that doesn't need to be trusted by everyone, as the dev can just add it to their local store. There are also other services that provide ACME certificates for localhost domains, doing basically what you do for free (I can't find a link to one, but it was posted on HN recently).

If you need a trusted certificate for local dev, something like Cloudflare Tunnels is more valuable as you can have other folks access the service.


My project, getlocalcert.net[1] may be the one you're thinking of.

Since I'm also building in this space, I'll give my perspective. Local certificate generation is complicated. If you spend the time, you can figure it out, but it's begging for a simpler solution. You can use tools like mkcert[2] for anything that's local to your machine. However, if you're already using ACME in production, maybe you'd prefer to use ACME locally? I think that's what Anchor offers, a unified approach.

There's a couple references in the Anchor blog about solving the distribution problem by building better tooling[3]. I'm eager to learn more, that's a tough nut to crack. My theory for getlocalcert is that the distribution problem is too difficult (for me) to solve, so I layer the tool on top of Let's Encrypt certificates instead. The end result for both tools is a trusted TLS certificate issued via ACME automation.

1. https://news.ycombinator.com/item?id=36674224

2. https://github.com/FiloSottile/mkcert

3. https://blog.anchor.dev/the-acme-gap-introducing-anchor-part...


What is complicated about local certificate generation?

You can generate one using openssl and then import it into the cert store. That's a few minutes of work to create a script, and to me it seems much less complex than the alternative of using some third-party service. Reminds me of leftpad.


It depends a lot on what you are doing. The Caddy experience is great, as it automates the whole process. But if you run Caddy in a Docker container or a VM and connect from the host, it's manual again. Maybe you've set up the Firefox trust store, but now Java apps can't connect. Your coworker wants to connect, and now you're managing trust stores on other systems.

If you need to write a script, then it's already too complex.


Of course it will differ depending on the scenario. Let's say I only want certs for local debugging and I use Java. I can create a script to generate such a cert and import it in 5 minutes. Since it's for local debugging, I don't care if the cert is leaked, so I can store it anywhere.

Which solution is less complex? I followed the OP's link to Anchor but was prompted to sign in using another third-party service, which is already more complex in terms of rolling out to a medium-sized team.


“Your coworker wants to connect”

It’s no longer local development now is it? You are crossing into dev deployment land. Unless you are talking about remote pair programming and setting up a reverse proxy at home to allow your coworker to connect or open some port forwards on your router (in which case, the world can connect).

I’m in the camp that can write a 14-line Golang program that can generate a self-signed cert for whatever you want. Anything that needs a root CA is going to need LetsEncrypt or those god-awful mafia-esque geotrust/verisign certs.

The certs should be installed on your OS, not your browser. No trust stores to manage. Firefox will use Windows Certificate store, Mac keychain, Linux /etc/ssl/certs.

For client certificates, you can generate a self-signed <dns-name-dejour> so long as your hosts file points the domain to your container IP or localhost.

It’s not that complicated, but it’s not trivial if you don’t understand certificates. Having a SaaS service do this for you is, I think, overkill. certbot could definitely do this. I think web frameworks should do this as part of their new-project process, considering the world requires HTTPS.


> It’s no longer local development now is it?

I think that's part of the challenge. You're not "doing local development" or "doing dev deployment". You're trying to solve an evolving set of problems. A local CA works for local, but becomes a pain if you ever need something different.


> My theory for getlocalcert is that the distribution problem is too difficult (for me) to solve, so I layer the tool on top of Let's Encrypt certificates instead. The end result for both tools is a trusted TLS certificate issued via ACME automation.

It's a really hard problem, and the root store programs do amazing work. The proof is that hardly anyone is even aware they exist at all! I've also done the "use LE for internal TLS" setup, and it worked great until I hit API limits and everything came grinding to a halt. There are a few advantages to using Anchor as a drop-in replacement for LE:

- we use an EAB token ACME workflow, so no need to set DNS records or expose infra to the internet, just push API tokens to containers and provision certs at container boot.

- EAB tokens are scoped to least privilege rules, so your staging tokens can't be used to provision production certs.

- Certs don't show up in public certificate transparency logs.


> However, if you're already using ACME in production, maybe you'd prefer to use ACME locally?

I just use Let’s Encrypt: https://gruchalski.com/posts/2021-06-04-letsencrypt-certific....

If I need a CA, I go for cfssl.

In fact, I don’t have ACME via my own CA, but it’s not necessary anyway if one is using LE.


I echo this sentiment.

I'm not sure a managed service is interesting. The target audience probably mainly wants it fully self-hosted. From quickly skimming the frontpage, quickstart, and terminology it's not at all clear why I need to / want to make an account in order to run / use this service.


Hi, author here. I've also done the self-signed cert in dev thing a bunch of times, and never really feel like it provides solid dev/prod parity for TLS in staging & production. And most certificate management products don't work well in development. One of our goals is to make certificate provisioning the same for all environments (including development), so that you can be confident that encryption that works in local development will work the same in staging & production.


> For localhost, you typically just generate a self-signed certificate

Ah, yes, that [mythical] developer on whom the entire company depends, who has one self-signed certificate to fulfill all its needs.

Everyone else has many developers running many local and non-local development (and not only development) environments, which may have full access to the internet or be isolated.

> doesn't need to be trusted by everyone, as the dev can just add it to their local store

And this is how certificate warnings start to be dismissed without reading, and how local self-signed certs find their way into the local stores of every computer device in the company.


Yes indeed. Those who think PKI is easy only understand that generating certs is easy, and don’t realize that managing trust of your certs is the difficult part. If you are willing to do a bunch of self-signed certs that aren’t trusted by all the right parties, you’re missing half the point of PKI.


The core use case might be to have Anchor and a cert-manager in k8s connected to it, and then be able to generate valid certificates for non-public services. These would also use solely private DNS.


You can create a self-signed CA in cert-manager directly already, which has the advantages that the private key never leaves your infrastructure, you don't need to create a login account on some external service to do it, it will work fine behind an airgap, and you can use your existing DNS domain instead of having to use Anchor's "lcl.host", which seemingly requires that all of your queries to resolve "private" URLs now go to public DNS servers.


Can you elaborate on this? We have some 300 internal APIs on a valid domain. We used to use let’s encrypt, but got rate limited for obvious and fair reasons when we were migrating between clusters. It’s a bit better with zerossl, but we still get 429s when cert-manager is issuing a ton of certs at the same time.


Just wanted to clarify that `lcl.host` is a service that only helps with local development, it's not useful (and shouldn't be used) in staging & production environments. For staging & production, we let customers use a public domain they own, or a special use domain (`.local`, `.test`, `.lan` etc).

Here's how the architecture you described works with Anchor: assuming your domain is `mycorp.it`, you can add it to your organization. Then create staging & production environments. This provisions a stand-alone CA per environment, and the CA is name constrained for the environment (e.g. only `*.stg.mycorp.it` in staging). Each of the 300 APIs can be registered as a service: this provisions an intermediate CA per environment that is further name constrained (e.g. `foo-api.stg.mycorp.it` in staging). For each service in each environment you generate a set of API tokens (EAB tokens in ACME parlance) that allows your automation to provision server certs with the ACME client of your choice. edit: in your case, cert-manager would be the acme client delegating to Anchor.


Yes, you can certainly delegate cert-manager to a CA in Anchor, which gives you a nice view into the cert material in use in your environment. And the client package support automates the toil of updating all your apps' or images' trusted root CA certs.


I don't really understand how "hosted" and "internal" go together here - does this mean that a) the devices I need certificates for must connect to your servers, and that b) your servers could theoretically sign certificates for devices which do not exist? If so, especially given the latter point, this IMO isn't really useful for any real-world application, as one of the most important things about a CA is control.


Yes, this is both "hosted" and "internal": we build & manage CAs per org. It's a bit like having an instance of Let's Encrypt, but just for your org (or per environment). Your clients will only trust the certs from your CA, and those CAs have constraints in place so that we could never issue a certificate outside of your set of configured DNS names. For example, even if a certificate was issued for gmail.com, it wouldn't be trusted by your clients.

We always build two-tier PKIs, which means your server certificates are issued by intermediate certificates, and those intermediates are issued by a root certificate. In the future, we will let users bring their own root certificate so that we never see your root key material, which you can keep safely in an HSM or KMS.
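As an illustration of how such a constrained two-tier chain behaves (not Anchor's implementation), here's a toy PKI in Go's standard library: a root, an intermediate name-constrained to `.stg.mycorp.it` (the hypothetical domain from the example upthread), and leaf certs that verify or fail depending on whether they fall inside the constraint:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func mustKey() *ecdsa.PrivateKey {
	k, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	return k
}

// issue signs tmpl with parent/parentKey and parses the resulting certificate.
func issue(tmpl, parent *x509.Certificate, pub *ecdsa.PublicKey, parentKey *ecdsa.PrivateKey) *x509.Certificate {
	der, err := x509.CreateCertificate(rand.Reader, tmpl, parent, pub, parentKey)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	return cert
}

// demo builds root -> name-constrained intermediate -> leaf and reports
// whether an in-scope leaf verifies and an out-of-scope leaf is rejected.
func demo() (inScopeOK, outOfScopeRejected bool) {
	rootKey, intKey, leafKey := mustKey(), mustKey(), mustKey()
	now := time.Now()

	rootTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "Example Root CA"},
		NotBefore:             now,
		NotAfter:              now.AddDate(10, 0, 0),
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign,
	}
	root := issue(rootTmpl, rootTmpl, &rootKey.PublicKey, rootKey)

	// The intermediate may only issue for subdomains of stg.mycorp.it.
	intTmpl := &x509.Certificate{
		SerialNumber:                big.NewInt(2),
		Subject:                     pkix.Name{CommonName: "Example Staging CA"},
		NotBefore:                   now,
		NotAfter:                    now.AddDate(5, 0, 0),
		IsCA:                        true,
		BasicConstraintsValid:       true,
		MaxPathLenZero:              true,
		KeyUsage:                    x509.KeyUsageCertSign,
		PermittedDNSDomainsCritical: true,
		PermittedDNSDomains:         []string{".stg.mycorp.it"},
	}
	inter := issue(intTmpl, root, &intKey.PublicKey, rootKey)

	leaf := func(name string) *x509.Certificate {
		return issue(&x509.Certificate{
			SerialNumber: big.NewInt(3),
			Subject:      pkix.Name{CommonName: name},
			DNSNames:     []string{name},
			NotBefore:    now,
			NotAfter:     now.AddDate(0, 3, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}, inter, &leafKey.PublicKey, intKey)
	}

	roots, inters := x509.NewCertPool(), x509.NewCertPool()
	roots.AddCert(root)
	inters.AddCert(inter)
	verify := func(c *x509.Certificate, dns string) error {
		_, err := c.Verify(x509.VerifyOptions{Roots: roots, Intermediates: inters, DNSName: dns})
		return err
	}

	inScopeOK = verify(leaf("foo-api.stg.mycorp.it"), "foo-api.stg.mycorp.it") == nil
	outOfScopeRejected = verify(leaf("gmail.com"), "gmail.com") != nil
	return
}

func main() {
	ok, rejected := demo()
	fmt.Println("in-scope leaf verifies:", ok)
	fmt.Println("out-of-scope leaf rejected:", rejected)
}
```

Go's verifier enforces the name constraint at chain-building time, so the gmail.com leaf fails even though the intermediate's signature on it is valid.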


> Your clients will only trust the certs for your CA, and those CAs have constraints in place so that we could never issue a certificate outside of your set of configured DNS names.

Does this work in practice? I was under the impression that the extensions for restricting which domains a CA can use weren’t widely supported.


It does work, and we've found it to be about as well supported as SAN names, which is pretty extensive these days. It's just not commonly used by public CAs because the real value of these public CAs is that they can issue for any valid domain name, not a predefined set.


Does this help at all with the issue of deploying the root CA cert on every device that will interact with services with these certs deployed? That seems to me to be the hardest part about running an internal CA. You've got to put it on everyone's laptops as well as all your servers.


Indeed, we automatically build language (JS, Go, Ruby, Python soon) and OS (Debian) packages that you can use in your application or base image. Those packages bundle the set of root CA certs so that your clients trust the certificates presented by servers. Soon we'll have automatic package publishing, so that rotating cert material is just another Dependabot PR.

edit: for the laptop problem, we have a CLI toolchain that gets your development environment setup by adding all the necessary CA certs to your local trust store. More about that here: https://blog.anchor.dev/getting-started-with-anchor-for-loca...
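For illustration, here's roughly what consuming such a bundled CA set looks like on the Go client side; the file path and internal host name are hypothetical, and this is a sketch, not Anchor's actual package code:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"fmt"
	"net/http"
	"os"
)

// clientWithPrivateRoots returns an *http.Client that trusts ONLY the CA
// certificates found in caPEM (e.g. a shipped root-cert bundle), instead of
// the system trust store.
func clientWithPrivateRoots(caPEM []byte) (*http.Client, error) {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, errors.New("no CA certificates found in PEM input")
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}, nil
}

func main() {
	// "ca-certs.pem" is a hypothetical path; substitute wherever your
	// tooling writes the CA bundle.
	caPEM, err := os.ReadFile("ca-certs.pem")
	if err != nil {
		fmt.Println("no bundle found:", err)
		return
	}
	client, err := clientWithPrivateRoots(caPEM)
	if err != nil {
		fmt.Println(err)
		return
	}
	// "foo-api.internal.example" is a hypothetical internal host.
	resp, err := client.Get("https://foo-api.internal.example/")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```

The point of packaging is that the `caPEM` bytes come from a dependency update rather than hand-copied files.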


Considering the amount of time I’ve spent dealing with trusting CAs in different environments (and worse, seeing people just disable cert verification), I think the real value proposition is probably in the client tooling.

Any org that cares enough to have an internal PKI (compared to just using e.g. public certs for internal DNS names, or wildcard certs) probably doesn’t mind hosting something internally.

But if the pricing is reasonable and it helps the client situation enough, then I could see it maybe being worth it?


One thing I'd say on the client side would be an integration with MDM vendors somehow. All the platforms have native hooks to install certs into their trusts that the MDMs use, and corporate IT is already using MDMs. I think that would actually be a lot easier than telling everyone "go download this CLI", especially if a non engineer needs to access these internal services.


How is this better than using the internal tls setting for caddy?


The internal TLS stuff built into Caddy is great, as is its support for ACME. But using Anchor with Caddy has a few extra advantages. We generate system & language packages for your clients so they trust the server cert. The dashboard provides a view into all the cert material in your environment. We have built-in maintenance schedules for rotating certificate material. And we constrain the CAs to minimize the risk of key leaks: https://blog.anchor.dev/blast-radius-certificate-constraints...


Have you done any research about how well different web clients support name constraints? I know that Chrome only recently started respecting Name Constraint on root CAs [1]. The BetterTLS project tracks a bunch of related concerns, but oddly missed this one [2]. I'm wary of this approach since I don't know if the various software I use will enforce it.

1. https://alexsci.com/blog/name-non-constraint/

2. https://github.com/Netflix/bettertls/issues/19


We did do some research a few months ago, and I don't remember flagging this Chrome issue. It could either be because we add the name constraints to the intermediate CA certs (we always set up a two-tier PKI), or because our tooling adds the certs to the system trust store (same as mkcert, which IIRC also adds name constraints), rather than importing them directly into the browser. Other than some issues with Rust's webpki crate (which they have since fixed), I don't recall any client compatibility issues with name constraints. Support was added to OpenSSL around the same time that SNI names were added, so we think of them as having roughly the same level of support, which is pretty good in 2023.

I don't have a solid answer, but my hunch for why BetterTLS doesn't place much emphasis on Name Constraints is because they have very limited use in public CAs. The latest cacert.pem bundle from curl only shows 141 certs with name constraints: `curl -s https://curl.se/ca/cacert.pem | certigo dump --json --format PEM | jq '.certificates[] | .name_constraints' | wc -l`


"...because they have very limited use in public CAs" Not really. It was/is mostly because NCs aren't 'widely' supported, even now. Name Constraints (referred to as 'Technical Constraints') allows - currently - a public CA to issue a CA certificate with NCs to a third party who then wouldn't require the full panel of WebTrust audits. It's very rarely used, and the one I dealt with eventually got wound down for myriad reasons including how tricky it really is to run a public CA, even a constrained one.

Some solutions (ADCS) obeyed name constraints when signing, but that doesn't help much.

Also - checking the cacert bundle isn't really a good test - that's for roots and you'll not find name constraints there. You'd better look at the thousands of issuing CAs (but the number is still tiny).


Do you offer wildcard certs for subdomains (e.g. *.news.ycombinator.com)? I believe I had some trouble with caddy's tls internal directive when trying to do something crazy like this. Maybe you could mention it as your differentiator too.

EDIT: I currently use mkcert with caddy and it works fine for this.


Yes, we do support wildcard certs (and will support IP certs in the future). But we don't let you provision certs for domains that you don't own.


Any reason why? That could limit the usefulness of the solution, I'd think. Do you allow issuance of not-hosted-by-anchor CAs for TLS inspection, for example?


We just see a whole lot of downside and no upside, what reason would someone have other than spoofing a third party domain?

I don't think I understand your second question, are you asking about cross-signing?


(Full disclosure, 20+ year veteran and CTO of big-CA-you-probably-know, but I really like how your product looks - just need a bit more time to explore!)

People do weird things with private CAs. Be it for testing or corporate shenanigans, they do want to issue for domains they don't control, data they can't verify, internal identifiers etc. - all in contrast to public/webPKI. This is fine, there's no real downside to letting them do it as it's a private CA after all.

The other thing wasn't cross-signing (not gazing into that abyss!) I just meant issuing a CA from the private root with CA=true so that it can itself issue certificates. Commonly used on MITM proxies/TLS inspection devices - sadly more common than you think, but again no risk to anyone outside of that enterprise. I believe even in some business areas, it's basically required to TLS MITM your users (finance).

Happy to chat more off here if you'd like - email on profile for personal or nick (at) sectigo dot com.


Feedback:

I don’t use services that require external services to auth with. I’ve stopped using GitHub whenever/wherever possible because of the ICE concentration camps thing and your service doesn’t allow me to log in or create an account without using GitHub.

Your website doesn’t say how much it costs.


Pricing is missing.


We don't have a paid offering yet. Right now we're focused on local development environments, which is free to use as individuals and organizations. In the future, we'll have a paid offering for organizations to use in their staging/production environments. Anyone interested in being a part of that pilot, please email me: my-username at anchor.dev


Your footer 'Get Started' link is broken.


oops! Sorry about that, should have a fix for that soon. In the meantime, here's the correct url: https://docs.anchor.dev/getting-started/quick-start



