The ABI (all registers caller-saved, parameters and return values unconditionally passed on the stack) is quite simple compared to C. Is it still in flux, or is it considered done?
Of course the Go team is aware of this, but once you support dynamic linking, ABIs become very hard to change. A bad one will weigh you down for a long time! Apple famously used PC-relative addressing on PowerPC, which has no program counter register. It was, let's say, measurably suboptimal, but they were stuck with it until the Intel transition (or technically the PPC64 one).
The ABI seems to sacrifice performance for the sake of simplicity, which IMHO has little value at such a low level. There's a reason why the x86-64 ABI passes 6 (4 for the MS variant) machine words via registers! Disappointing.
Both D and Go use an unusual convention. The GCC maintainers raised issues over D's ABI during discussions about integrating the GCC-based D compiler. I wonder how this applied to Go's inclusion. Or did GCC support the Plan 9 ABI before Go's inclusion?
The author next makes a small comment that dynamic linking will be making its way into Go sometime soon. Right now, with Go binaries being fully statically linked, you always know all of the code that you link against, making it easier to dive into. Does anyone know what the Golang team is thinking about when it comes to dynamic linking?
I was recently surprised by this when I tried to run a simple Go web server in a chroot -- apparently Go defaults to dynamically linking against the host C library for domain name lookups (partly because doing this with static code on OS X is hard).
The upshot is that while "go build" might be expected to create a static binary (which should be easy to run in a chroot), in practice it won't unless you manually rebuild your Go toolchain, passing in a parameter to avoid cgo/linking to C libraries for name lookup.
It's my understanding that you have to do this on Linux as well: it isn't enough to just build your project with CGO_ENABLED=0; you'll also have to rebuild the Go toolchain with CGO_ENABLED=0 for it to work. But perhaps this has changed in recent versions of Go?
Eg:
cd /tmp
git clone git@github.com:apg/wipes.git wipes.git
cd wipes.git/
go get github.com/gorilla/websocket
go build
ldd wipes.git
# > libc.so.6 among others
export CGO_ENABLED=0
go build # makes no difference, produces identical binary
Afaik, in this case it's 'net/http' that pulls in a dependency on cgo
(again, unless we rebuild the whole go toolchain with cgo_enabled=0).
Not that this is terrible as such, but it's a gotcha if you want to run
the resulting binary in a chroot, or on a host with a different version
of libc...
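For what it's worth, on newer toolchains (Go 1.5 and later, if memory serves) the standard library ships pre-built for both cgo modes, so the toolchain rebuild should no longer be needed. A sketch, reusing the binary from the example above (the guard and output names are mine):

```shell
# On Go 1.5+, CGO_ENABLED=0 at build time is enough; no toolchain rebuild:
CGO_ENABLED=0 go build -o wipes-static . || echo "need a Go toolchain on PATH"

# Sanity check -- a fully static binary has no dynamic dependencies:
file wipes-static 2>/dev/null   # expect "statically linked"

# Or keep the cgo build and just force the pure-Go DNS resolver at
# runtime, avoiding the libc lookup path:
# GODEBUG=netdns=go ./wipes.git
```

The GODEBUG=netdns knob also lets you flip back to the cgo resolver (netdns=cgo) without rebuilding, which is handy for comparing lookup behavior.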
By that criterion Go doesn't have any support for DLLs at the core language level either; it just happens to export some Win32 API functions (LoadLibrary, GetProcAddress, etc.) via goc (like cgo, but specific to the Go bootstrapping build process) in the win32 version of the syscall package, which allow you to do your own DLL late binding.
Goc is nothing like cgo. Goc is a simple preprocessor that allows writing C code with Go function declarations, and it transforms said code into C code compiled by 6c, using rules that are somewhat hard to apply by hand without screwing up. For all practical purposes goc is not required; people can write the C code by hand. It has no effect on linking or on the ability to use code in external objects. Also, it's seldom used: most of the code is C code, not touched by goc.
The Go linker does know how to link with external objects and use their symbols, and it does this without requiring the external object to be present at link time. It can do this, because the most basic relocations don't require any knowledge about the object. This was previously supported only on PE-COFF, as it was previously used by the Windows target, but now I have extended it to ELF too, and it's used on the Solaris target. In principle, it works on every other ELF platform too (Linux, *BSD).
Note that in all of these cases ABI translation needs to be done, and the internal machinery in the runtime that does this is in motion, just like with cgo. There's no difference in the runtime performance envelope, the only difference is that it doesn't require a target toolchain, like cgo does.
In short, the Go linker can link with ELF shared objects just fine, just as it can link with PE-COFF shared objects, but it supports only the most basic of relocations in both cases (this is a property of the design, as we can't expect the presence of the shared object, because that would break cross-compilation), and it's not what you probably want anyway.
As I understand it, dynamic linking is not what is coming soon. What may be coming is the building of a Go program as a shared library, with its interface to the rest of the process likely being through cgo. This was motivated by someone targeting the Android NDK.
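That matches what eventually shipped: later Go releases (1.5, if memory serves) added -buildmode=c-shared, where //export comments mark the cgo entry points. A minimal sketch, file names mine:

```shell
# Write a tiny Go library exposing one function to C callers.
cat > hello.go <<'EOF'
package main

import "C"

//export Add
func Add(a, b int) int { return a + b }

// main is required by buildmode=c-shared but never called.
func main() {}
EOF

# Produces libhello.so plus a generated libhello.h for C callers
# (guarded so the sketch is a no-op without a Go toolchain):
command -v go >/dev/null && go build -buildmode=c-shared -o libhello.so hello.go || true
```

The generated header declares Add for C, so the host process (e.g. an Android NDK app) can dlopen the .so and call into Go, with cgo's machinery handling the ABI translation at the boundary.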
The Go calling convention is described correctly, but the runtime is largely C (compiled by a modified Plan 9 C compiler), and C compilers return values in a register, not on the stack.
The Plan 9 ABI is significantly different. Yes, the compilers come from Plan 9 (Inferno, actually), but the calling convention was modified. E.g. on Plan 9 the first argument is usually passed in a register.