Is there some actual proof that Go code is less prone to runtime errors or more maintainable or easier to debug because of this?
Surely, if Go's method of error handling is superior to every other programming language, there must be some meaningful statistical side effects that can be observed to prove it right? Lower rate of runtime errors, faster time to debug errors, etc. Someone out there in academia must have been able to actually measure this effect if it exists?
If not, can team Go stop going on and on about how superior their excess keystrokes are?
You would probably see a bunch of blog posts about 'how we saved 3000% of our SRE time by switching from Go to NodeJS'.
Have you seen many of those?
Aside from this, it's probably quite difficult to measure! Sure, there are a bunch of metrics you can capture, but unless you're really doing a randomised controlled trial, you will have an absolute dog's breakfast of confounded observational data - good luck getting anything useful out of that.
Hey, it would make for an excellent doctoral thesis for anyone in a CS program interested in programming languages and productivity... and a great gateway to getting hired by Google!
I don't think it would be hard to measure at all.
Randomize some CS students into Go and non-Go and see which teams produce projects with lower rates of defect and faster time to submission. Try it with teams in enterprises; I'm 100% certain that this kind of statistical evidence is meaningful to some CTO/CIO somewhere. Prove to the entire world that everyone should be using Go because it will lead to cost savings not just in time to market, but also in less downtime and fewer defects. The financial benefits would be enormous! Possibly billions if not trillions of dollars of productivity gained, system downtime avoided, and runtime defects eliminated ("what runtime defects?") because of the inherent superiority of `if err != nil`.
> Randomize some CS students into Go and non-Go and see which teams produce projects with lower rates of defect and faster time to submission.
Using students still has problems - have they coded before? Have they worked in a team with industry-style review processes before? The population you're sampling is quite different to experienced software engineers. It's not quite "...in mice" levels of qualification needed, but it's fundamentally the same problem.
> Try it with teams in enterprises; I'm 100% certain that this kind of statistical evidence is meaningful to some CTO/CIO somewhere
Difficult again - how do you do a fair comparison? You'd need fair randomization - in particular, as soon as anyone gets any say in which team uses what language for which project, you have a good chance of stuffing things up. Is one of the projects more important to the business, so it's likely to get a "better" team allocated to it? Does one of the teams have a language preference? Does the business have a language preference? Is one of the projects well-suited for Go (say, a web server) and the other a terrible fit (say, a desktop app)?
One approach might be to get two teams to build the same thing and throw one version away, but what CTO is going to pay for that even once? Let alone fund enough repetitions for a sample size you could conclude anything from.
I'm not trying to get into fights about languages, I'm trying to make the point that actually making an objective measurement like you suggest is really, really difficult.
> Using students still has problems - have they coded before?
Why would it matter, if we're doing randomized experiments with students assigned to different language groups? That's how randomized experiments work: it doesn't matter whether they've coded before, because random assignment across a statistically relevant number of students balances prior experience between the groups.
Errors do not only happen because of code defects; they are often a response to bad data. And even if there were fewer defects, that could be because of other features of Go, not its error handling.
So that doesn't seem like a good metric to determine if Go's error handling is "better" (in some dimension) than other techniques.
Bad data isn't unique to Go; let's assume bad data is an external constant.
If Go is a superior programming language -- error handling or not -- it should be easy to find the statistical evidence. Otherwise, you just prefer green crayons; please, by all means, go enjoy your green crayons.
Go's error handling isn't any better or worse; it's just objectively way more verbose. With what benefit? If you wish to claim one beyond your subjective vibes, please by all means prove it with evidence.
The topic was not determining which language has fewer defects. It was specifically about whether the error handling aspect is better.
I'm not claiming any benefit of Go's error handling, by the way.
I've never even used Go. Just pointing out that measuring defects is not a good way to determine whether it is or isn't.
> It was specifically about whether the error handling aspect is better.
If it's better, how? Can you prove it? Does it lead to lower defect rates? Lower time to debug? Fewer runtime errors? How is it better, other than your preference for green crayons?
Prove, as in statistical evidence that code and teams working with Go yield software with lower rates of runtime defects, faster time to debug runtime errors, and improved developer productivity because of its obviously superior design and philosophy of error handling, etc.
These are all measurable. Productivity is measurable. Defect rates are measurable. Time to debug is measurable.
If there's a peer-reviewed paper, let's see it. If there isn't, you should produce it and get hired by the Go team!
If this is indeed the superior way of handling errors:
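namely the ubiquitous check, sketched here with made-up step names:

```go
package main

import (
	"errors"
	"fmt"
)

// step stands in for any fallible operation (name invented).
func step(n int) (int, error) {
	if n < 0 {
		return 0, errors.New("negative input")
	}
	return n + 1, nil
}

// pipeline chains three calls; the same three-line check is
// repeated verbatim after every one.
func pipeline(n int) (int, error) {
	a, err := step(n)
	if err != nil {
		return 0, err
	}
	b, err := step(a)
	if err != nil {
		return 0, err
	}
	c, err := step(b)
	if err != nil {
		return 0, err
	}
	return c, nil
}

func main() {
	n, err := pipeline(0)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(n) // 3
}
```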
and not just excess keystrokes and a preference of style, then there will be some statistical side effect that can be measured to prove its superiority. If you can't prove it, all you're saying is "I prefer purple because purple is obviously superior to blue! Can't you see!"... O...K.
You're being pedantic, and as is often the case when people are pedantic, you're also wrong [0]. Their use of the word "prove" is entirely consistent with standard definitions, and "demonstrate" is listed as a synonym for "prove".
> Can you demonstrate Go's method of error handling for a specific context is superior? Yes, you can.
Then please do! Articles like this one are entirely subjective and anecdote-oriented, not evidence-based proofs (or demonstrations if you like).