Noob package/library question

Please excuse my ignorance, but I’ve been googling around on this topic for a day or so now and have not found an answer yet, so I am posting here in the hope that someone can shed some light on it.
As a C coder it is common to create a library for functions used by multiple applications, such as a static myutils.a or a shared myutils.so. These libs would typically be put in a lib directory at the same tier as the various apps that link against them. So a typical tree might look like:
/app1/somefile.c
/app2/otherfile.c
/app3/morefile.c
/lib/file1.c, file2.c, file3.c (compiled into myutils.so or myutils.a)

Then any or all of the apps could link against the myutils.a or myutils.so in the lib directory by setting the linker search path (e.g. -L/lib at link time, or LD_LIBRARY_PATH at run time for the shared version). I found this to be a common method at several companies I’ve worked for over the years for building applications in C and C++, and also a very comfortable development model, so I was trying to emulate it in Go and found it quite problematic. In particular, creating actual libraries out of packages seems to be a bit of an ordeal. I have read through a few docs but to no avail. I did come across a few docs on linking shared package files, but those links were inaccurate and did not work.

Can anyone tell me if this is possible? And if so, could someone point me to a relevant link or document? I do appreciate any help you can give.

Hi @apglaser,

Welcome to this forum!

Go is not really made for creating or using shared (or dynamic) libraries. Instead, you would rather share the source code itself, as a Go module, via a public or company-internal repository. Others can then import the packages from your module and compile a single, static, self-contained binary from it.

Advantages:

  • No shared lib version hell. (“Darn, my app needs v1.2.3 or better but the system has v1.0.7 installed…”)
  • Static binaries have no dependencies and are thus dead easy to deploy.
  • Shared libraries are bound to the specific platform (read: OS and architecture) they were compiled for. To support multiple platforms, you would need to create and maintain an extra lib for each of them. X OSes times Y architectures times Z library versions = hyperventilation. Not so with source code packages.

See e.g. https://golang.org/doc/tutorial/getting-started#call about how this works.
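To make the module-sharing model concrete, here is a minimal, self-contained sketch (all names are hypothetical). In a real setup the utility function would live in its own module, say github.com/yourco/myutils, and each app would simply import it; here everything sits in one file so the sketch compiles standalone:

```go
// Sketch of the "share source, not binaries" model.
// In practice, Reverse would be exported from a shared module, e.g.:
//
//	import "github.com/yourco/myutils"
//
// and each app would get its own statically linked copy at build time.
package main

import "fmt"

// Reverse is a stand-in for a function you would normally export
// from the shared myutils package. It reverses a string rune-by-rune.
func Reverse(s string) string {
	r := []rune(s)
	for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
		r[i], r[j] = r[j], r[i]
	}
	return string(r)
}

func main() {
	fmt.Println(Reverse("Hello, Go!")) // prints "!oG ,olleH"
}
```

Because the importing app compiles the package from source, there is no separate .a/.so artifact to version, distribute, or keep in sync.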

Christoph, thanks for the response, and I do understand where you are coming from. I spent quite a few years myself coding and cross-compiling C and C++ for the ARM7, so I am more than familiar with all of those anxieties. I just started playing with Go in the last few months, and recently I built a relatively small and simple app and compiled it statically for an Orange Pi Zero. When I scp’d the binary over to the target I almost fainted when I saw the size of it. I have not yet looked into stripping the final binary, so I suspect that might be my next step in the battle against bloat.
Also, from a project management point of view, I am more accustomed to keeping what Go would call packages in their own little local directory and building them as a library of sorts, either a statically built “.a” or a dynamically built “.so”. In my searches I have come across some conflicting reports on the best way to accomplish that. I tried “go install”, but it seems to install the package in a system directory that I did not have write access to. Then I tried to force it to build locally, and that also did not seem to work. I have played around with “go mod” and “go mod tidy”, and had issues getting them to work as I was hoping they would.
Regardless of static or dynamic linking, what I would like to do is have a directory where a collection of package files live and compile them into a little library right there in that directory. Then, in another directory, I would like to keep the files used to create main and import functions from my library. Although I have come across several examples of this on the internet, I have had very little success getting it to work, which I thought was strange, because in C and C++ it is a very straightforward process. I think I read that Ken Thompson and Rob Pike were two of the main drivers behind Go, so I figured this kind of thing would be a no-brainer. Apparently not.
Nonetheless, Christoph, I do appreciate your taking the time to post a response, and I shall endeavor on in my efforts.

I might be totally wrong about what you are trying to achieve, but the mainstream Go toolchain is not the way to go (no pun intended) if you want to compile for embedded devices, mainly because it packages the runtime into the binary. I would suggest you take a look at TinyGo, which piggybacks on LLVM to cross-compile to a reasonably sized binary.

Ashish, thanks for that info and the link; that may very well provide a lot of help. I think that could be half the battle.

Prior to Go 1.11, packages named in the import (…) block were automatically located using the GOPATH (and GOROOT) environment variables. You can still use that approach by setting GO111MODULE=off. (No need to name imported packages in the “go build” command.) Check out the documentation on Go’s environment variables.

Charles, I do appreciate your help and I will look into them and see where that takes me.

I see your points.

Binary size is not so much an issue for server-side apps, nor for typical CLI tools, which are the two main realms of Go. (Docker, Kubernetes, the Caddy web server, Traefik, CockroachDB, all the cool HashiCorp stuff - it’s all written in Go.)

If small binaries are a requirement, Go is perhaps not the right choice.

Have a look at, for example:

  • TinyGo, “A Go compiler for small places”
  • Any systems programming language (these usually have no GC and only a small runtime), like Zig or Rust

Regarding keeping locally compiled libraries available for building an app, Go already does that for you, automatically.

Whenever you import a package, Go takes care of fetching the enclosing module from the remote repo, caching it locally, and even pre-compiling it for you. No need for manually managing any of that.

(Do ls $(go env GOPATH)/pkg and inspect the files under the directory named after your current combination of OS and architecture, like, e.g., “darwin_amd64”. There you’ll find all your imported packages, cached and pre-compiled.)

You do not have to manage local compiles of libraries yourself anymore. And this is why you have not found an easy way to do it manually: it is simply not meant to be done that way in Go.

There is no need to go back to the old and flawed GOPATH single-workspace model. GOPATH still exists internally and caches all packages that your projects imported.

The Go Modules way of keeping source packages locally is to use replace directives in go.mod.

GOPATH was never a ‘single’ workspace for me. I’d have it reference a list of local workspaces: at minimum, the one I was working in and a common workspace where I vendored Git packages and promoted utility packages I’d developed. (That configuration still works with Go 1.16 when I go back to it.)

Indeed, GOPATH can contain multiple paths. But that’s not the point. The Go module system provides so many advantages over the old GOPATH system that I would not recommend that anyone stick with GOPATH.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.