I honestly don't really care how "unreliable" you think shared libraries are. Fully static linking is how we get gigantic 500MB monoliths that waste disk space and RAM, integrate poorly with the rest of the system because of mismatched library versions, and never get security patches from the system unless you manually update the binary itself.
Static linking may be "easier", but as programmers, it's our job to use the *right* solution, not just the easiest one.
The cost of static linking may not matter on your development rig, with lots of spare disk and memory, but what about someone running your code on a netbook from 2009? Can your program even fit on their computer? Can they use multiple programs at the same time, or do they have to close everything else to free up enough memory? What if they don't have fast Internet access, and can't afford to download the same security patch 50 times over? These are all things you have to consider as a software engineer.
Dynamic linking might not be perfect, but it's the *right option to use*. Rust, Go, and other such languages need to start supporting it, unless they want to join the developers forcing people to buy newer, more powerful, more expensive computers every year. (If people can even afford it; if they can't, they're just stuck barely being able to use their computer.)
Gonna prefix this w/: I fully sympathize w/ you.
My own experience is: if the Rust world embraces code sharing/reuse the way we want, it will not be in the form of dynamic linking as implemented today. It would be done "busybox-style": one big, append-only super-binary containing all the libs/deps, with each "program" just an entry point into it.
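For anyone unfamiliar with the busybox trick: the binary is installed once and symlinked under many names, and it picks which "applet" to run by looking at `argv[0]`. A minimal sketch in Rust (the applet names and bodies here are made up for illustration):

```rust
use std::env;
use std::path::Path;

// Hypothetical applets compiled into the one shared binary.
fn applet_hello() {
    println!("hello");
}

fn applet_true() {
    // Does nothing, exits successfully, like /bin/true.
}

// Map an applet name (taken from argv[0]) to its entry point.
fn dispatch(name: &str) -> Option<fn()> {
    match name {
        "hello" => Some(applet_hello),
        "true" => Some(applet_true),
        _ => None,
    }
}

fn main() {
    // The same binary is symlinked as "hello", "true", etc.;
    // the basename of argv[0] selects which applet runs.
    let argv0 = env::args().next().unwrap_or_default();
    let name = Path::new(&argv0)
        .file_name()
        .and_then(|s| s.to_str())
        .unwrap_or("")
        .to_string();
    match dispatch(&name) {
        Some(applet) => applet(),
        None => eprintln!("unknown applet: {}", name),
    }
}
```

All the code is statically linked exactly once, so the disk/RAM sharing benefit of shared libraries is recovered without needing a stable dynamic-linking ABI.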
Parametric polymorphism plus Rust's unstable ABI mean your application and library source code can't really vary independently: changes in the lib affect the app, and vice versa.
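Concretely, the problem is monomorphization: a generic library function has no single compiled body you could dynamically link against. A small sketch (the function here is invented for illustration):

```rust
// "Library" side: a generic function. The library crate ships this as
// generic MIR, not as one compiled symbol; every downstream caller
// gets its own monomorphized copy baked into its own binary.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut best = items[0];
    for &x in &items[1..] {
        if x > best {
            best = x;
        }
    }
    best
}

fn main() {
    // "Application" side: these two calls produce two separate
    // instantiations, largest::<i32> and largest::<f64>, compiled
    // into *this* binary. Change the library's body and the app must
    // be recompiled, because the app literally contains library code.
    assert_eq!(largest(&[3, 7, 2]), 7);
    assert_eq!(largest(&[1.5, 0.5]), 1.5);
}
```

So even with a stable ABI for plain functions, the generic boundary would still couple the two crates at compile time.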
On the internet, everyone knows you're a cat — and that's totally okay.