> But let's take that argument at face value: then why _not_ assembly if this is the case?
Okay, then you can throw away the rest of your post and stop right here. The overwhelming historical evidence is that assembly doesn't scale.
> That's an argument for Go to not have types.
Sorry, that doesn't follow. Is the logic here that because I mention Smalltalk, I'm advocating late binding and Object as the only type for Go? The argument is that Go doesn't need a more complicated type system to avoid problems with heterogeneous collections -- because practice shows that even a simpler one can suffice.
> Frankly I find it bizarre that people claim that they have found type errors to be trivially fixable, because the scope of where a type error can be introduced is enormous in an untyped language...
Sounds like you're invoking freshman-level false "common knowledge." Have you ever worked in an "untyped" language in a real project? What if a project simply used runtime asserts? Then a type error in a heterogeneous collection would be caught in unit testing. If it got out to production, it could be easily caught and logged. In 15 years of Smalltalk industry work I never encountered the kind of heterogeneous collection type error you're referring to in production. The closest thing I can recall involved the heterogeneously typed reuse of a local variable. (Which is simply bad coding style in Smalltalk.) In Go, you have a type system that provides much more feedback at compile time, and workable mechanisms for detecting the problem at runtime. So at least in this one instance (heterogeneous collections) there is arguably almost no practical benefit to parametric polymorphism.
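The runtime-assert approach can be sketched in a few lines of Python (the function name and the "integer cents" invariant here are made up for illustration):

```python
def total_cents(amounts):
    """Sum a collection of integer cent amounts."""
    # Runtime type assert at the boundary: a stray string or float
    # fails fast here -- in unit tests, or with a clear log line in
    # production -- instead of silently corrupting arithmetic downstream.
    for a in amounts:
        assert isinstance(a, int), f"expected int cents, got {type(a).__name__}"
    return sum(amounts)
```

A bad element, as in `total_cents([100, "250"])`, raises an `AssertionError` naming the offending type -- the "easily caught and logged" behavior described above.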
(P.S. Technically speaking, Smalltalk is strongly typed with message passing semantics for methods implemented through late binding. It's not "untyped.")
> Okay, then you can throw away the rest of your post and stop right here. The overwhelming historical evidence is that assembly doesn't scale.
Huh? I'm not actually arguing that assembly is a scalable language. I'm invoking a counter-example to the idea that a smaller cardinality of concepts is inherently a good thing. Assembly has fewer concepts than Go, so by the espoused benefits of a language with fewer features, assembly should be favored. But obviously this is not true, so I doubt that Gophers actually subscribe to this version of "simplicity".
My point here is that Gophers need to examine their rhetoric a little more and hone their definition of "simplicity", since it's clearly not just having less "stuff", as Rob Pike seems to claim in every Go presentation.
> Sorry, that doesn't follow. Is the logic here that because I mention Smalltalk, I'm advocating late binding and Object as the only type for Go? The argument is that Go doesn't need a more complicated type system to avoid problems with heterogeneous collections -- because practice shows that even a simpler one can suffice.
Your original question was how does unsafe casting introduce cost. It adds cost in exactly the same way that every other means of circumventing a type system or not having a type system introduces cost: it allows runtime errors to occur at points where data is illegally used.
Type systems are effectively proof solvers. Just as making an improper assumption in a logical proof can lead to a faulty conclusion, forcing a type system to assume a type for a value that it cannot prove can lead to a buggy program. This is why programmers who are strong believers in static type checking take issue with casting: it's a way of circumventing the protection that a type checker gives you, when instead you could add power to the type system for expressing your constraints, or add means of showing the equivalence of different types.
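A tiny Python sketch of that failure mode (the config shape is hypothetical): the bad value is introduced in one place, but the error only surfaces where the value is used -- which is exactly the cost an unchecked cast reintroduces into a typed language:

```python
def load_config():
    # The wrong type sneaks in here -- port should be an int.
    return {"port": "8080"}

def next_free_port(cfg):
    # ...but the failure only surfaces here, at the point of use,
    # possibly far (in code and in time) from the actual mistake.
    return cfg["port"] + 1  # TypeError: can only concatenate str (not "int") to str
```

Every call path between the two points is a suspect when debugging, which is the "enormous scope" claim above.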
> Sounds like you're invoking freshman level false "common knowledge."
There's no need to get personal here. I'm making a factual point: any code path calling into the point where a type bug occurs is potentially responsible. Nothing in my comment invokes "common knowledge". Also, keep in mind that invoking your "personal experience working on X large-scale system in industry" is not a compelling argument. It's not even a comparative argument about an untyped language vs. a statically typed language.
> Have you ever worked in an "untyped" language in a real project? What if a project simply used runtime asserts? Then a type error in a heterogeneous collection would be caught in unit testing. If it got out to production, it could be easily caught and logged. In 15 years of Smalltalk industry work I never encountered the kind of heterogeneous collection type error you're referring to in production. The closest thing I can recall involved the heterogeneously typed reuse of a local variable.
Yes, I have. I've worked in Python and JavaScript on a couple of large projects. I'm not going to get into my feelings about this, because I don't think it forms the basis of a compelling argument.
However, I take issue with the claim that these kinds of bugs are always trivially caught in unit tests. One thing to note about untyped languages is that they allow an infinite number of values to be passed to a function, so there's no way to write an exhaustive unit test. This isn't unique to untyped languages (for instance, I can't write an exhaustive unit test in Haskell for a function that accepts strings), but a sufficiently expressive typed language always gives me the ability to reduce the scope of my tests by writing more constrained types (for instance, with sized collection types using DataKinds in Haskell). Similarly, languages that disallow parametric types cannot express constraints about contained values in a type, which allows exactly the same sorts of bugs that an untyped language can have.
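The non-exhaustiveness point can be made concrete in Python (`head` and `head_ints` are hypothetical names): with no constraints the input space is unbounded, while even a runtime-checked constraint narrows what tests must probe:

```python
def head(xs):
    # Unconstrained: callers can pass lists, strings, tuples, anything
    # defining __getitem__, ... -- no finite test suite covers them all.
    return xs[0]

def head_ints(xs):
    # A (runtime-checked) constraint shrinks the space of legal inputs,
    # much as a more precise static type would at compile time.
    if not (isinstance(xs, list) and all(isinstance(x, int) for x in xs)):
        raise TypeError("head_ints takes a list of ints")
    return xs[0]
```

A static parametric type (`[Int]`, `[]int`, etc.) would make the second function's check a compile-time guarantee instead of a runtime one.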
Unit tests are great, but they are better suited to probing the edges of acceptable inputs based on assumptions about the code under test, and are generally poorly matched to providing the guarantees of a type system. They are not perfect: they can suffer from laziness, code rot, faulty assumptions, etc. I've seen bugs in test code far more frequently than I've seen bugs in a type checker (in fact, I don't think I've ever seen a bug in a type checker).
My argument here boils down to the fact that you can trivially show there's potential for human error here that a type system can protect against. The point of contention you have is that these kinds of bugs don't manifest in practice. In my experience they do, and they occur more frequently in larger-scale systems, where there are more invariants to juggle that a type system doesn't ensure for you. I'd also argue that this largely explains the resurgence of typed languages with more expressive type systems (like Scala, Rust, Swift, Idris, Hack, etc.). Ultimately I think we just have to agree to disagree here.
> (P.S. Technically speaking, Smalltalk is strongly typed with message passing semantics for methods implemented through late binding. It's not "untyped.")
Untyped is commonly used in academic literature to refer to "dynamically typed" languages[1]. The strong/weak typing distinction is arguably imprecise, or even useless, especially for dynamically typed languages. For example, how does Smalltalk prevent "type punning" when functions do not declare the types of the values they may be called on? Perhaps you can make the argument that dynamic languages like these justify their claim to "strong typing" by having built-in operators that do not make implicit conversions of the values they work on, but this guarantee doesn't hold in general in user-defined code, so it seems like a useless distinction.
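That split is easy to demonstrate in Python, another dynamically typed language usually called "strongly typed" (`double` is a hypothetical helper): built-in operators refuse implicit conversions, but user-defined code can still treat one type as another without declaring anything:

```python
# Built-in operators refuse implicit conversions ("strong typing"):
try:
    "1" + 1
except TypeError:
    pass  # str + int is rejected rather than silently coerced

# ...but user-defined code can still "type pun" freely:
def double(x):
    return x * 2

assert double(3) == 6       # intended use
assert double("3") == "33"  # silently "works" on the wrong type
```

Nothing flags the second call until some later operation happens to break, which is why the guarantee doesn't extend to user-defined code.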