I always have a hard time understanding the arguments of people who
strongly oppose dynamic linking. It often gets emotional real quick, and
then this or that "sucks", but nobody knows anymore why it sucks.

Now that we have Go and Rust, which link statically by default (Go
produces fully static binaries for pure-Go programs; Rust statically
links all Rust crates, though it usually still links libc dynamically),
the issue becomes more visible.

(Honestly, me personally, I have a purely pragmatic problem with static
linking: if I were to rewrite the roughly 300 tools and scripts in my
~/bin in Go or Rust, I would need, like, what, a couple of *gigabytes*
to store them? At, say, 3 MB per resulting binary, 300 of them already
come to about a gigabyte. This really does not feel right to me.)
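
For a rough sense of the per-binary baseline, here's a minimal sketch
(the sizes are assumptions; they vary with Go version, platform, and
build flags):

    // hello.go -- about as small as a real tool gets.
    // On Linux, `go build hello.go` yields a statically linked
    // executable of roughly 1-2 MB with current toolchains -- an
    // assumption; it shrinks with -ldflags="-s -w" and grows with
    // every dependency you pull in.
    package main

    import "fmt"

    func main() {
        fmt.Println("hello")
    }

Multiply a 1-3 MB baseline by ~300 tools and you land anywhere between
a few hundred MB and a couple of GB. (And `ldd` on such a binary just
reports "not a dynamic executable" -- which is exactly the point.)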

So, I always wondered why Go and Rust do that in the first place. They
don't really provide any arguments; it's not part of their marketing --
or I'm too blind to see it. Is it even intentional? Is it because Go
tools are "modern" and "meant for containers" or something like that?

Now, this blog post has an idea:

<https://blogs.gentoo.org/mgorny/2021/02/19/the-modern-packagers-security-nightmare/>

> In modern packages, static linking is used for another reason entirely
> — because they do not require the modern programming languages to have
> a stable ABI. The Go compiler does not need to be concerned about
> emitting code that would be binary compatible with the code coming
> from a previous version. It works around the problem by requiring you
> to rebuild everything every time the compiler is upgraded.

Huh. Again, I cannot find "proof" of this claim (because Go and Rust
simply don't explain why they do things the way they do -- or, again,
I can't find their explanation). But it's somewhat compelling ... "Hey,
we want to use this stuff in containers and, oh, look, if we just do
static linking, we solve the ABI problem of our very young language,
too!"
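
One thing that is observable, though, and at least consistent with the
ABI story (a minimal sketch, nothing official): every Go binary records
the exact toolchain version that built it, and all dependencies are
compiled in at pinned versions -- there is no shared Go runtime on disk
that a newer compiler would have to stay binary-compatible with.

    // buildinfo.go -- print the build metadata embedded in this binary.
    package main

    import (
        "fmt"
        "runtime/debug"
    )

    func main() {
        // ReadBuildInfo returns the metadata the Go toolchain embeds
        // into every module-aware build (the toolchain version is
        // included since Go 1.18).
        info, ok := debug.ReadBuildInfo()
        if !ok {
            fmt.Println("no build info embedded in this binary")
            return
        }
        fmt.Println("built with:", info.GoVersion)
        // Dependencies are baked in at fixed versions; nothing is
        // resolved at run time the way a shared library would be.
        for _, dep := range info.Deps {
            fmt.Println("compiled-in dep:", dep.Path, dep.Version)
        }
    }

(`go version -m some-binary` prints the same information without any
code.)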

Maybe. Hm.