Sidenote: I really like the cookie consent form on this site. It's unobtrusive, clear, opt-out by default and the highlighted and only button is "Continue to site". And it even has a built-in GDPR request form! Bravo to https://www.clym.io/
Nice article, covers the basics well. Credential files seem like the simplest way to go and are secure enough for most local uses. For anything more involved, a secrets manager is probably required. I've been using Linux for a long time and hadn't heard about `keyctl`, thanks for mentioning it. A more flexible solution might be https://github.com/mozilla/sops
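For the credential-file route, a minimal sketch of the pattern (the file name and token value are illustrative, not from the article): set restrictive permissions at creation time via umask, rather than tightening them after the secret has already hit disk.

```shell
# Minimal credential-file pattern (all names illustrative): keep the
# secret in an owner-only file and read it at runtime, instead of
# hard-coding it or passing it on a command line.
umask 077                                  # new files default to owner-only (600)
printf '%s\n' 's3cr3t-value' > api-token   # hypothetical token file
API_TOKEN="$(cat api-token)"               # load it into this process only
```

With the umask set first, there is no window where the file exists with looser permissions.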
In a maximised browser window on a 13" screen it floats above and covers up the first few characters of each line of body copy, and it’s sticky, so it moves with me as I scroll. If I have to interact with it to read the article, it’s not unobtrusive.
That's not what I experienced. Is it just way worse for different locations (NZ here)? It's not opt-out by default; I had to turn off advertising and analytics cookies manually. The cookie consent pop-up takes up a third of a big mobile screen. As well as "Continue to site", it has buttons for "Policies", "Preferences", "Do not sell my personal information", and "Powered by CLM".
Cookie pop-ups need to die. Non-essential cookies aren't to "improve your experience", they're usually for invasive tracking, and if a site only uses strictly necessary cookies, even the GDPR doesn't require explicit consent.
That's strange. Maybe it defaults to opt-out only in the EU?
While it's not entirely unobtrusive, especially on mobile, it's far better than similar forms on most other sites that usually take up the bottom half of the screen, or block the content entirely behind a modal dialog.
> As well as "Continue to site", it has buttons for "Policies", ...
Those others aren't buttons, but links, and small ones. The most prominent element is the single button that dismisses the dialog and is actually safe to click. If you've seen most other consent forms, rejecting cookies (if possible at all) is usually done via a small link next to a prominent "Accept all" button and other such dark patterns.
So this is a great UX improvement, though I agree with you that all these forms are obnoxious and we should get rid of them. Unfortunately it's the best we currently have to mitigate this abuse legally (at least in the EU).
Also, I'd be fine with some non-essential cookies for e.g. analytics, as long as this is not shared with 3rd parties, so no Google Analytics and such, but few sites implement it that way.
Though TBH all this feels like privacy theater. There are more sophisticated ways than cookies for tracking users that aren't being discussed nearly as much, yet are probably already widely used.
> If you've seen most other consent forms, rejecting cookies (if possible at all) is usually done via a small link next to a prominent "Accept all" button and other such dark patterns.
That's exactly how this one works for me. The prominent "Continue to site" is "Accept all" in disguise, and to opt out you need to click on the little "Preferences" button/link. It's the same dark pattern as usual, not a UX improvement.
About the cookie:
just install the extensions `CookieAutoDelete` and `I don't care about cookies`. No more cookie warnings, no more privacy problems.
About the article:
TL;DR: use a secrets file.
Auto-deleting cookies means that you don’t need a filter list to block bad sites, every site will be wiped. There’s no chance that a missed site will sneak through.
The downside is that you will need to whitelist every site that you want to remember your login/settings for when revisiting them.
You can achieve that by just using incognito mode. Though the extension would surely be more flexible.
Still, I haven't used a browser extension in years, partly because of extensions like this that require access to all browsing data, even if they're open source like in this case (though the author of "I don't care about cookies" has a strange concept of distribution[1]). The inconvenience of wiping cookies and clicking through cookie consent forms is much more tolerable than allowing random extensions access to all my browsing data.
I use sops for my dotfiles. There are some oddities if you want to commit only an encrypted copy to git and use a .sops.yaml, but it's not a big deal. I hadn't thought of using it for local-only use in this sort of thing, but it's a good idea.
> Some operating systems still make every process’s environment variables world readable. (But, in all the Linuxes I’ve seen, /proc/<pid>/environ is not world-readable.)
A couple years ago this came up and someone made this claim, but no one could ever name an OS where this is the case. Maybe someone on HN knows one? :)
Oh oh I know this! Ultrix 2.2 in 1990 did this. Not via /proc, which did not exist in that OS, but via the ps command. From the Ultrix Security Guide for Users:
"Note that denying other users read permission [to your .profile] does not mean that they cannot find your PATH or any other of your environment variables. The -eaxww options to the ps command display in wide format the environment variables for all processes on the system"
Ultrix 2.2 was a BSD 4.2 variant by DEC. I doubt it was unique in this behavior, my guess is all BSDs leaked info in this way, but I don't have a reference handy. mkj's example in this thread of AIX suggests Sys V did it too. Modern Linux only shows you your own processes' environment variables.
(Young people of Hacker News: beware what operating systems you learn because 31 years later their idiosyncrasies will still be burned into your brain.)
On Linux, the permissions on /proc/$PID/environ are the same permissions you'd need to read the memory of the process, so just being able to list that dirent doesn't give you anything. Though I do think that command lines being visible to other users is a design flaw.
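A quick way to see that asymmetry for yourself (assuming Linux; `$$` is the current shell's PID):

```shell
# environ is readable only by the process owner, but cmdline is
# world-readable -- which is the design flaw mentioned above.
stat -c '%a' "/proc/$$/environ"   # prints: 400
stat -c '%a' "/proc/$$/cmdline"   # prints: 444
```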
Without considering resources or practicality, if we were to re-design computers and servers from security-first principles, what would features like management of secrets look like? Secure enclaves are wonderful but the secret still has to be propagated or used. A ground up computer design might greatly embellish on the idea of a secure enclave.
Can you really lay blame on the kernel though? All of this stuff with secrets is happening in userspace and mostly at the shell. If anything, systemd would probably be the place you'd want to build a secret storage system--perhaps build something API driven similar to Hashicorp's vault.
To be fair, it has only whatever is on-hand to use. And the burden of running in a lot of different environments. Apple, having control of the hardware, and a very specific/limited set of places to run, can do smarter things around enclaves, etc.
The usual problems apply: identity, authentication, authorization, roots of trust.
Imagine your enclave is a separate server on the network: how do you define which processes get access to which secrets under which circumstances? How do processes prove who they are to the enclave?
Maybe the idea of an OS with processes running on a single interconnected silicon is part of the issue, too. Just brainstorming based on your response.
In the programming language I’m working on, MethodScript, I’ve created a subclass of string, secure_string. The normal print value is “secure string”. Unlike somewhat equivalent classes in other languages, this is a subclass of string, which means it can be passed around opaquely to things that only accept strings, but then the functions that actually need to be aware of secrets can check for the subclass and call the special decryption method. The string is encrypted in memory, but this is really just obscurity, because the decryption key is there too. But it does prevent most logging from ever leaking the secret, including things like memory dumps. I wish mainstream languages had a feature like this, since MethodScript is still fairly far in the “toy” category for now.
> the functions that actually need to be aware of secrets can check for the subclass
I feel like this is an antipattern in OOP. Shouldn’t the functions just be defined to take in parameters of type SecureString instead?
I think this violates the liskov substitution principle (https://en.m.wikipedia.org/wiki/Liskov_substitution_principl...). For example, how is the “length” function or “startsWith” defined on SecureString? It seems like either data would be leaked via side channels, or the behavior doesn’t really conform to the String specification.
Yeah, that's a decent point. To answer your question, it's as if it's the string "*secure string*" for the purposes of other string methods.
I think this is a good point, and perhaps worth considering, but in most of my thought exercises, it's worth it, owing to the fact that when I use SecureString in other languages, I find myself having to decrypt the string more often than I would like. I guess one could argue that this is a deficiency of the libraries and such that are used, rather than a deficiency of the fact that SecureString doesn't extend String. I'm having trouble recalling a specific example right now, but I know I've run into it before, because that's what prompted me to implement this many years ago.
> Yeah, that's a decent point. To answer your question, it's as if it's the string "secure string" for the purposes of other string methods.
That's a problem, because you are forcing all your functions that accept normal strings either to check that the argument is NOT a SecureString or to return nonsense. For example, a function that returns the first two characters, without caring about secure strings, could be passed one as an argument and return "*s".
The Liskov Substitution Principle states that a function accepting an Animal (a String) should never have to care whether it's a Cat (a SecureString), and should be able to trust Animal (String) methods to return meaningful values.
I fear that this design violates the so-called "substitution principle", which states that you should be able to use instances of a subclass anywhere instances of the parent class are used. In this case, you have defined a default behavior, but I would argue it's degenerate, because all instances of your subclass present the same value.
In general, it's better to design subclasses to be constrained versions of the parent. If you want to add functionality to a class you're better off using another technique like composition, or even just a simple utility method.
Another way to state the issue here is that a user of an encrypted string will always need to know that it is an encrypted string because the only thing cyphertext is useful for is feeding the decryption algorithm.
I don't really see the value in considering it a subclass of a string. Without decryption, the only meaningful "string-like" operation on it is equality (and even that assumes identical strings share the same encrypted value, or that equality is tracked in a separate property). Why even make it a subclass of string in the first place?
I've been using a combination of the Keyring[0] CLI utility, Direnv and environment variables to load secrets for specific projects from the macOS keychain into the environment. The advantage of Keyring over macOS own security tool is that it is cross-platform and your setup scripts will work fine for Linux users as well.
Dealing with secrets on a Linux desktop is quite a mess right now. E.g. for my SSH and GPG keys I use some concoction of gnome-keyring and gpg-agent, and I must admit I don't really understand how it works.
Himitsu[0] is an interesting approach to a secrets manager. It apparently uses an "agent" to do authentication/whatever and passes on the resource to the application that requested it, meaning applications don't have to touch the secrets themselves.
While the author is correct that you can leak secrets that other users could see, in modern systems you can avoid accidental leaks via containers or host OS modifications.
If you're using containers, only other users inside the container will be able to see leaked secrets, and typically a running container has just one user. If an application in the container gets hacked, the whole container is vulnerable anyway, leaks or not; the same goes if the container was running in privileged mode, which compromises host security. And if someone gets access to your Docker host, or root access on it (privilege escalation to root is pretty trivial on Linux), they can see all information, leaked or not.
If you aren't using containers, and just a regular Linux OS, a couple methods exist to harden process information between users (such as cmdline and environ) to contain most of the leaks.
What you do want to avoid is writing secrets to persistent storage, or filesystems that many different containers or users can access. You also want to prevent passing secrets to containers as environment variables, as your orchestration system might expose them to more users accidentally, and some logging systems might log them.
The best practice for passing secrets into a container is to have your container orchestration system pass credentials to the container, such as with an instance metadata service, or temporary volume-mounted secrets filesystems. The container would use that passed credential to then access a secrets manager.
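On a plain Linux host, one concrete hardening option of the kind mentioned above (assuming Linux 3.3+ and that it suits your setup) is the `hidepid` mount option for /proc, which hides other users' process entries entirely. As an /etc/fstab sketch:

```
proc  /proc  proc  defaults,hidepid=2  0  0
```

With `hidepid=2`, unprivileged users cannot even see that other users' processes exist, let alone read their cmdline.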
No mention of `pass`? Certainly solves all my needs, and works well for version controlling secrets across a team.
The only annoying thing about `pass` is the generic name, so searching for information on it, or dealing with issues is a pita. Luckily, there are few.
Edit: I was wrong about what this article covers. Which is how to pass secrets to processes without leaking to `ps` or audit log.
Keeping the password storage in a gitlab repo makes it very useful for managing those secrets internally. Keep the list of public gpg keys for each team member, and a README, some helping initialization script for setting it up, and that's pretty much it.
Then, in all cases presented where the command would be used either directly on the command line or within another script, it's simply replaced with a call to `pass`.
How does pass solve this problem? Surely entering "command `pass accountdata`" has the same problem that the secret shows up in ps as command's argument.
I see. I should have read the article more closely before commenting. I made a bad assumption based on the comments I saw here. The article is more about avoiding leaking the password than actually dealing with managing passwords in command lines.
A few comments on the actual article, which was a lot more insightful than I had thought:
- Protecting yourself from audit logs is not really worth the effort, as the audit log should be treated as confidentially as the secrets themselves.
- Utilities can themselves hide their arguments, so they will not show in ps output. For example, `mysqlsh` shows up as `--password=********`, as does `mysql`. I'm not 100% sure whether modifying argv data has any effect on /proc, but then again, if you are protecting yourself from something that can access /proc (i.e. root), then nothing will work in the end. Protecting yourself from a rogue process that can monitor `ps` is also already a lost battle.
> if you are protecting yourself from something that can access /proc (i.e. root)
on Linux and all other Unix-like systems I know of, all users are allowed to list running processes and their command lines.
> Protecting yourself from a rogue process that can monitor `ps` is also already a lost battle.
ps is not a privileged command on most systems. POSIX says "On some implementations, especially multi-level secure systems, ps may be severely restricted and produce information only about child processes owned by the user."; "severely restricted" implies that this is not a common behavior.
furthermore, the article explains clearly how to hide secret arguments from ps. that's the main point of the article.
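A quick demo of the point (assuming Linux with /proc mounted; `sleep 2` stands in for a command that was handed a secret argument):

```shell
# Any user can read any process's command line via /proc -- no root needed.
sleep 2 &                              # stand-in for `tool --password=hunter2`
pid=$!
tr '\0' ' ' < "/proc/$pid/cmdline"     # prints: sleep 2
wait
```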
The difficulties mentioned in the article with passing secrets on the command line is one of the reasons why we wrote encpass.sh (https://github.com/plyint/encpass.sh). We had a similar need for a lightweight solution for managing secrets for simple shell scripts on our local workstations and in restricted environments. Bonus, it can be easily customized with extension scripts to adapt functions for your own specific needs. See our keybase extension for an example -> https://github.com/plyint/encpass.sh/blob/master/extensions/...
it's not the main point of the article, but curl -X is being misused here. as the manual explains, "Normally you don't need this option. All sorts of GET, HEAD, POST and PUT requests are rather invoked by using dedicated command line options." https://daniel.haxx.se/blog/2015/09/11/unnecessary-use-of-cu... reiterates the issue, along with a reminder that -X persists past redirects (hope your API doesn't redirect to a completion page).
after searching for "PUT" in the manual, I found "-T, --upload-file <file> [...] If this is used on an HTTP(S) server, the PUT command will be used. Use the file name "-" (a single dash) to use stdin instead of a given file."
Doppler solves this problem by storing your secrets in the cloud *hand wave*.
In actuality, the Doppler CLI (a Go binary) fetches your secrets from Doppler's API and injects them as environment variables into your specified process. That looks something like `doppler run -- printenv`. This prevents your secrets from being written to the filesystem in plain text, and prevents the environment variables from being available more broadly. In the case of docker, you would bake the Doppler CLI into your image, thereby sidestepping the documented `docker inspect` pitfall.
Of course, the CLI still needs a way of authenticating to Doppler's API. You authenticate and authorize the CLI by running `doppler login`. This initiates a login flow that you complete in your browser. Once completed, your newly generated auth token is sent back to the CLI. The CLI then saves the auth token to your system keyring for later use. The identifier needed to access that keyring entry is then stored in plain text in the CLI's config file (~/.doppler/.doppler.yaml), which is only readable by the user.
We're exploring other means of injecting your secrets into your application, as some users are wary of using environment variables. This is a challenging problem though as there are few means of injecting secrets that don't require substantially changing your application's logic.
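For readers curious what the run-wrapper pattern looks like, here's a hypothetical sketch of what a command like `doppler run --` does conceptually. This is not Doppler's actual code: `fetch_secret` stands in for the real API call, and `API_TOKEN` is an illustrative variable name.

```shell
# Hypothetical sketch of the "run wrapper" pattern (illustrative names).
fetch_secret() {
    printf 'hunter2'   # placeholder: a real CLI would call the secrets API
}

run_with_secret() {
    # Inject the secret only into the child process's environment; it is
    # never written to disk or passed on the command line.
    API_TOKEN="$(fetch_secret)" "$@"
}

run_with_secret printenv API_TOKEN   # prints: hunter2
```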
I’ll add onto this: if at all possible, secrets should be short lived and rotated reasonably often.
Given enough time and enough secrets, it becomes asymptotically likely that a secret will become exposed. Ensuring that those secrets expire quickly is a good way to mitigate this.
Yes. If someone manages to gain a foothold on the machine running the process they would not instantly be able to elevate privileges. Of course the happier scenario includes no hackers on your machines, but why dig yourself a deeper hole?
It seems like an easy solution would be to use, say, Python instead of Bash. Nothing would then be inferable from environment variables or the output of `ps`.
> Local (unexported) environment variables are also easy to leak into ps output
...and then the variable containing the secret is expanded in a non-builtin simple command in the following snippet. It is taking a gun, pointing it at your leg, and pulling the trigger, very deliberately, if you know how POSIX shells work.
tl;dr - If worried about leaking the path to a secret file, use <() (e.g. <( < "${secret_file_path}" )), which creates an opaque ephemeral file, i.e. /dev/fd/N, on the command line (1)
Sorry for the top level comment, but I am surprised this hasn't been shared by someone else already. Process substitution(1) (e.g. <(SECRET_STUFF_HERE)) totally solves the problem of leaking secrets to the process table.
From RTFA, the author is using "$(< $STEPPATH/certs/root_ca.crt)" and is concerned about leaking "$STEPPATH/certs/root_ca.crt" to the process table. If the author instead ran <(< $STEPPATH/certs/root_ca.crt), the process table would just show some opaque ephemeral file path instead.
For example, let's say I didn't want folks knowing from the process table that I was getting the file details of /etc/passwd.
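A sketch of that case (bash/zsh; `wc` stands in for whatever command actually consumes the file):

```shell
# Process substitution: the real path never reaches the process table.
# ps sees something like `wc -l /dev/fd/63` (fd number varies), not
# /etc/passwd.
wc -l <(cat /etc/passwd)
```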
I have found this to be most useful not for files, but for temporary secret variables that I want to wrap in a file, but not have to deal with the cleanup and management of that file.
For a shell variable example, rather than writing "B64 is not encryption" to a file, and then having to cleanup that file... use process substitution to create the temporary file which doesn't leak to the process table.
## Don't do :
~ % echo "B64 is not encryption" | base64
QjY0IGlzIG5vdCBlbmNyeXB0aW9uCg==
## Do this instead; also 'echo' is a bash builtin and won't leak
~ % base64 <(echo "B64 is not encryption")
QjY0IGlzIG5vdCBlbmNyeXB0aW9uCg==
Or if used in scripting:
secret="B64 is not encryption"
bar="$(base64 <(echo "${secret}"))"
Some caveats: This only works for commands that accept files for arguments. If a command requires a secret to be passed in as a plaintext argument, then maybe the right answer is to rethink using that command in the first place (maybe <<<"HERESTRINGS" (2) helps in that case though I doubt it).
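For reference, a herestring does keep the value off argv by feeding it via stdin, so it helps exactly when the command will read stdin, and not otherwise (a sketch, bash/zsh only):

```shell
# A herestring delivers the value on stdin, not argv, so it stays out
# of the process table -- but only for commands that read stdin.
base64 <<< "B64 is not encryption"   # prints: QjY0IGlzIG5vdCBlbmNyeXB0aW9uCg==
```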
PROTIP: And for the love of all that's holy... run shellcheck(3) before considering running your script for realz if you want to keep your butt out of the fire. Also, the google shell style guide (4) is full of practical/good stuff. Shellcheck and the google style guide are pretty magical. I lost 10 lbs without dieting, became more attractive, and married my wife from following the guidance from those two resources. You can too! (caveat lector, ymmv, not legal advice (ianal), wife is already taken... sorry)
How is the first case ("Don't do :") in base64 example different from the second one? It seems to me to be exactly same thing. In both cases base64 reads from stdin and echo builtin is used. What am I missing?
I wrote a simple oh-my-zsh (but should be easy to port out) plugin to improve UX of the environment variables option [0]. It's basically a very simple secrets manager, allowing one to store env variables (or whole chunks of scripts) in GPG-encrypted files and see if any secrets are sourced at the moment.