I came here to quote and comment on this paragraph: the poke at C/C++ undefined behaviour, claiming Go has no such thing. Except that sentence describes exactly undefined behaviour: after a data race, because of possible memory corruption, anything could happen.
There is also an equivalent smugness about the language in general. It is one thing to say there is no undefined behaviour related to the memory model; but given an optimizing compiler, you need hard analysis to show there cannot be any once the code is optimized, with all the operation reordering that can happen.
The simplicity of the Go memory model, which is vague on purpose, is taken as making it safe. Is it? Mostly it means it is vague, which provides a convenient cover to hide under.
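To make the point concrete, here's a sketch (my own example, not from the article) of why a Go data race is not just a wrong value but potential memory corruption: an interface value is a two-word (type, data) pair, and an unsynchronized write is not atomic across both words, so a reader can pair one value's type word with another's data word. `go run -race` will flag it:

    package main

    // Hypothetical demo: racing writes to an interface variable. An
    // interface value is a (type, data) word pair written non-atomically,
    // so a concurrent reader can observe the type word of one value with
    // the data word of the other; the method call then dereferences a
    // wild pointer. Run with `go run -race` to see the race reported.

    type speaker interface{ speak() }

    type dog struct{ name string }

    func (d *dog) speak() { _ = d.name }

    type counter struct{ n int }

    func (c *counter) speak() { c.n++ }

    var s speaker = &dog{name: "rex"}

    func main() {
        done := make(chan struct{})
        go func() {
            for i := 0; i < 1000000; i++ {
                s = &dog{name: "rex"} // racy write
            }
            close(done)
        }()
        go func() {
            for i := 0; i < 1000000; i++ {
                s = &counter{} // racy write, different layout
            }
        }()
        for {
            select {
            case <-done:
                return
            default:
                s.speak() // a torn read here is memory corruption
            }
        }
    }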
Edit: toward the end, the author talks about adding documentation about compiler optimizations that would be explicitly forbidden. So some of my earlier comments would be addressed.
I don't know. A lot of the criticism of Go is just wild over-exaggeration. Is "tech-splaining" just setting the record straight? For example, the guy upthread is insinuating that Go is roughly on-par with C++ as they both have nonzero amounts of undefined behavior. Having used both languages extensively, I can say for certain that you will run into a lot more UB in C++ than in Go, and it will be a lot harder to debug when it happens. Similarly, the guy downthread who is making a mountain out of Go's nullable pointers (and all of the other people who complain about Go's type system)--yeah, they are strictly worse than Rust's enums, but the overwhelming majority of all software ever written has been built with languages that have nullable pointers and a good chunk of that is with languages that have fully dynamic type systems (not only is the type system going to allow your null pointer access, but it will allow all manner of type errors!). Is it "smugness" to put criticism into perspective?
If you look at the history of language features that have been added or might be added (e.g. generics), you can find plenty of past explanations of why the language doesn't need those features in the name of simplicity, and why the reader just doesn't understand the brilliance of Rob Pike.
Reminds me a lot of the Mac forums, where every poor feature in an Apple product is explained with a "let me help you understand why you're thinking about it wrong" kind of answer -- up until Apple fixes the bug / implements the feature, to the applause of the same people who had been explaining why it never needed fixing.
And just the tone of the article turns me off, the way it begins with a bunch of quotes and philosophy. I actually agree entirely with "Don't communicate by sharing memory. Share memory by communicating", but I figured that out myself in probably 2003, writing crappy Perl scripts that used parallelism while I aggressively tried to avoid all the concurrency pitfalls. Actor models and Erlang later made completely intuitive sense to me. The principle is entirely correct, but it's fucking weird that a programming language needs to have a list of "proverbs".
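To be fair to the proverb, it does compress into a few lines of Go. A minimal sketch of what it means in practice (one goroutine owns the data; everyone else talks to it over a channel, so there's nothing to lock):

    package main

    import "fmt"

    // "Share memory by communicating": instead of guarding a shared
    // counter with a mutex, a single goroutine owns the counter and
    // receives work over a channel.
    func main() {
        inc := make(chan int)
        total := make(chan int)
        go func() {
            sum := 0
            for n := range inc {
                sum += n // only this goroutine ever touches sum
            }
            total <- sum
        }()
        for i := 0; i < 10; i++ {
            inc <- i
        }
        close(inc)
        fmt.Println(<-total) // prints 45
    }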
> If you look at the history of language features that have been added or might be added (e.g. generics), you can find plenty of past explanations of why the language doesn't need those features in the name of simplicity, and why the reader just doesn't understand the brilliance of Rob Pike.
Isn't this just nutpicking[0]? You have that with every language. I can criticize C++ on proggit right now and 3 or 4 people will respond "C++ has every language feature, just use the ones you want/like/etc and ignore the others, your problems are invalid!". Similarly, on an OCaml forum I can find a dozen people who tell me if Jane Street hasn't run into my problem before and solved it, then it's not a real problem and I'm dumb for trying to do it. I can post here or on r/rust (or proggit) and people will tell me that Rust is strictly faster/easier to develop with than any GC language because in the worst case you can always throw `Rc<T>` on everything. I can post on r/python about how hard it is to optimize Python and people will tell me I'm dumb and I can just use multiprocessing, rewrite the slow bits in C/Cython/etc, use numpy/pandas/etc. I can merely register for an account on a Java forum and be berated for my low intelligence. :)
> The principle is entirely correct, but it's fucking weird that a programming language needs to have a list of "proverbs".
I don't feel as strongly as you do, but I agree that proverbs are pretty low quality and invite more confusion than they address: probably only a rung above analogies and a rung below quotations.
I don't know what is meant by "sloganeering" but if writing a full, nuanced article (e.g., TFA) fits the bill then I'm not sure I agree with your conclusion...
In other words, I agree that "sloganeering" attracts the nuts, but I'm not sure I agree that TFA is sloganeering.
If the TFA amounted to "don't communicate by sharing data; share data by communicating <mic drop>" then yeah, I would be on your side, but the author went to the trouble of writing a 4K word essay to support his points. That you and others reduce it to mere slogans isn't a valid criticism of TFA IMHO.
To your point, there are actual people in and outside of the Go community who do this kind of lazy argumentation. For example, someone upthread said (and I'm hardly paraphrasing), "Hoare called nullable pointers a billion dollar mistake and Go has nullable pointers so... <mic drop>".
> the focus on "simplicity" also is both a good thing, and a great way to shut down any discussion.
Pretty sure you could level the same criticism against the Java folks for "configurability", the C++ folks for "performance/control", or the Haskell folks for "type safety". It still sounds like you're nutpicking rather than observing something unique to the Go community.
I don't think generics are a good example. From what I understand, their stance was "We haven't found a good way to put them in Go, but we understand that you might want them. We'll try some things and see what fits Go best." That got warped into "you don't need generics anyway" by some overly zealous people. Maybe a better example would be the type system, with Rob Pike saying something about taxonomy being boring?
I myself like the "proverbs". They're short, simple reminders of the frame of mind I should be in when writing or reading a particular language.
Sometimes people don't actually want help, they just want to complain. Other people misinterpret this as a question or a debate, so their explanatory responses are received negatively. You'll find that this also applies to C++, Rust, every programming language, and anything in general.
I'm a bit tired of reading motivated blog posts like this that cite enough related work that you know they know solutions exist to the problems they're raising, but for some reason don't feel it necessary to mention those solutions. This line in particular is plain false:
> None of the languages have found a way to formally disallow paradoxes like out-of-thin-air values, but all informally disallow them.
The post neglects to mention the Promising Semantics paper of 2017 that resolves the out of thin air problem to (as far as I know) everyone's satisfaction, despite pointing out the previous work that brought up the problem. Similar things are true for ARM's memory model, etc.--this is all stuff that's been mostly resolved within the last few years as proof techniques have caught up with compilers and the hardware. Ironically, the thing that's been hardest to formalize by far in a useful way (outside of C++ consume) is--surprise, surprise--sequentially consistent fences!
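For readers who haven't seen it, the canonical out-of-thin-air litmus test is tiny. Here it is as a Go-flavored fragment for concreteness (in the literature it's usually stated with C/C++ relaxed atomics); x and y start at zero:

    package litmus

    // Two threads copy values between x and y; both start at 0.
    var x, y int

    func thread1() {
        r1 := x // read x
        y = r1  // write what was read into y
    }

    func thread2() {
        r2 := y // read y
        x = r2  // write what was read into x
    }

    // A naive axiomatic model permits the outcome r1 == r2 == 42: each
    // write "justifies" the other's read in a satisfaction cycle, and
    // the 42 appears out of thin air. No real compiler or CPU produces
    // this, but ruling it out formally without also ruling out useful
    // optimizations is the hard part.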
It also handwaves away as some unimportant point the reason why compilers provide things like relaxed accesses--it's not just (or even primarily) about the hardware, but about enabling useful compiler optimizations. Even if all hardware switched to sequentially consistent semantics, don't expect languages that aim for top performance to abandon weak memory. And personally, I think it's ironic that at a time when even Intel is struggling to maintain coherent caches and TSO, and modern GPU APIs don't provide sequential consistency at all, people are trying to act like hardware vendors will realize "the error of their ways" and go back to seqcst.
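The textbook example of what weak/plain accesses buy the compiler is loop-invariant load hoisting; a Go-flavored sketch:

    package spin

    // With a plain (non-atomic) load, the compiler may hoist the read
    // of done out of the loop, turning the busy-wait into an infinite
    // loop: effectively "if !done { for {} }". Requiring every access
    // to be sequentially consistent would outlaw this whole class of
    // optimization, which is why languages chasing performance keep
    // weak orderings regardless of what the hardware does.
    var done bool

    func wait() {
        for !done {
        }
    }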
I had not looked at the Promising Semantics paper of 2017. Thank you for the reference.
That said, what I have learned from watching this space for a decade is that even formal proofs are no match for the complexity of this general problem. You have to get the definitions right, you have to prove the right statements, and so on. There is still plenty of room for error. And even a correct, verified proof is not really an insight as to why things work.
Experts were telling us in 2009 that OOTA had been resolved to everyone's satisfaction, only to discover that it wasn't, really. Maybe Promising Semantics is the final answer, but maybe not. We need to get to the point where everything is obvious and easy to explain, and we're just not even close to that yet.
Looking at citations of Promising Semantics in Google Scholar I found this Pomsets with Preconditions paper from OOPSLA 2020: https://dl.acm.org/doi/abs/10.1145/3428262. It contains this sentence:
"As confirmed by [Chakraborty and Vafeiadis 2018; Kang et al. 2018], the promising semantics [Kang et al. 2017] and related models [Chakraborty and Vafeiadis 2019; Jagadeesan et al. 2010; Manson et al. 2005] all allow OOTA behaviors of OOTA4."
I take that to mean the jury still seems to be out on excluding all OOTA problems. Maybe the canonical example has been taken care of, but not other variants. And again we don't really understand it.
Behavior of a program after data (as in value) races can be defined and constrained.
If you have wild pointer writes (aka arbitrary memory corruption) and almost any non-trivial control flow, the results are pretty much undefined. As in, it is pretty much impossible to specify or reason about what a program will do.
The article:
> ... such races can in turn lead to arbitrary memory corruption.
Tearing writes themselves are not undefined behavior, but use them wrong and you're off into undefined territory, with no way back.
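Concretely (my sketch, same mechanism as with interfaces): a Go slice header is a (ptr, len, cap) triple written non-atomically, so a racy assignment can tear, leaving a reader with one array's pointer and another's length:

    package tear

    // Tearing on a multiword value: a slice header is (ptr, len, cap).
    // A reader racing with these writes can observe small's pointer
    // paired with big's length; the bounds check then passes for an
    // index far outside small's array, i.e. arbitrary memory access,
    // with no single line you can point to as "the" undefined one.
    var s = make([]byte, 1)

    func writer() {
        small := make([]byte, 1)
        big := make([]byte, 1<<20)
        for {
            s = small
            s = big
        }
    }

    func reader() {
        for {
            p := s // racy read of the three-word header
            if len(p) > 1000 {
                _ = p[1000] // can read far outside small's bounds
            }
        }
    }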
> Undefined behavior is ... because of the optimizations.
Optimizations are popular, but it doesn't matter why the behavior is undefined.
It's not "undefined behaviour". It's "implementation-defined behaviour".
Undefined behaviour would be: if the compiler thinks the race always happens, it (the compiler) can do anything, including e.g. transforming the whole program into
    int main() {
        return -1;
    }
In Go's case, the compiler performs no such shenanigans: it just compiles your code as-is, and it's your responsibility to make sure the code is correct. But precisely because it's not undefined behaviour, the analysis is much simpler than in C/C++: there's no funny business going on in the compiler, so all effects of a data race are local in the code (i.e. no "nasal demons").