I am trying to understand how Go supports kernels with different ABI versions, particularly in the context of cross-compiling. More specifically, if I need to use something only available in a more recent kernel, how can I make sure the cross-compiler targets the correct kernel version?
The same question can be asked for standard Go packages: how can we leverage a new capability of an existing kernel subsystem that a given package supports? An example of such, from the past, is when IPv6 was added. Go did not exist at the time, but let's assume it did: I guess one would have had to wait for the net package to support IPv6. But then how would cross-compiling to the correct kernel version have worked?
Well, this is working around the issue. But what if you must use NPTL? (Or IPv6, in my hypothetical example.) A more recent example: what if you need to control the host/device mode of a USB-C port? For this, you need kernel 4.14 (or around that; I'm not sure of the exact version).
It depends on the kernel sources you have and how you distribute your application. There are a few ways to do it:
If you’re distributing your application source code to the target, normally you just update the source code and recompile the application when new features are available. Many existing tools (package managers) already facilitate such activity, like Flatpak, apt, pacman, etc.
For cross-compiling, normally we switch the kernel sources to the targeted version and pre-compile them before we cross-compile the intended applications. In languages other than Go, there is usually a Makefile to automate this. For details, you may need to specify the type of kernel, target processor & OS, etc.
Convert a target machine into your build machine and do No. 1 on it as part of your CI build pipeline.
As far as I understand, a Go binary belongs to the application layer, which should be independent of the kernel. They are 2 different layers on the software stack diagram. Managing the kernel and managing a Go application are completely independent of each other. Hence, cross-compiling against a kernel should not directly affect Go compilation (except for drastic changes to the ABI).
In terms of new kernel feature support, on the Go side, it is more a matter of probing for the new feature (IPv6) before use, with a fallback mode (IPv4?) if the probe fails. Obviously, the new feature will be delivered in a new module package, either externally or by getting merged into the standard library. The former is usually encouraged before requesting a merge (refer to the contribution guidelines).
Go for kernel development is still experimental so I’m assuming you’re not talking about that domain.
First, I forgot to mention that I am not even a Go user yet; I am looking at it to see if I can use it. I have never used it so far… but people are saying good things about it, so I am checking it out!
Concentrating on binary distribution (in a container), and therefore cross-compiling, it is clear that referencing the correct kernel sources (actually, just headers) is a must. In C or C++, the kernel sources are part of the cross-toolchain build “sandbox”; you do not rely on the local host’s sources, for obvious reasons. I do not see that in Go’s cross-compile documentation on golang.org. I do not see the kernel sources in the git repo of the Go compiler either (maybe I missed them). Given the fact that the cross-compiler targets a wide variety of OSes (Linux, Windows, etc.), there must be something somewhere, but I fail to see it. (Windows is probably even worse than Linux when it comes to kernel versions, but this is just a guess.) So, the question for option 2 is: how do you “switch” the kernel sources? On a Linux build machine, how do you “switch” to the correct macOS kernel that you want to target? Or vice versa.
Of course, option 3 is possible, unless you target a small environment like a Raspberry Pi with a real application (not just a toy). But this option does not integrate well into a high-quality CI infrastructure; cross-compilation is much better in such a context.
Ultimately, the doc on golang.org is incomplete. It needs to be augmented.
On the Go binary and package aspect: you are right, normally the kernel is totally abstracted. In C/C++ this is achieved by the fact that the local libc.so is used at the run location. But since a Go executable is static, it must cater for the abstraction that libc provides. Hence, compiling the cross-compiler against the correct target kernel version is important. The worst thing that can happen is to use a compiler built on a recent kernel and execute the application binary it generates on an old kernel; depending on what you are doing, your application may simply crash and burn. The normal way to build and distribute binaries is to target the oldest kernel that supports what you need. Maybe I am wrong and Go applications are compiled to use libc, but this is not what I read.
No, I am not targeting kernel development using Go. That would be a bad idea IMHO.
Based on your question, I must clarify that I’m just a junior app developer passionate about Go and Linux. If there are any senior Go (especially Go compiler) developers here, feel free to correct me. My purpose is to learn and to help others whenever I can.
Given that Go is licensed under a permissive open-source license and that its OS & architecture cross-compilation works flawlessly without triggering Microsoft’s or Apple’s licensing legal teams, I highly doubt that Go’s internals are bound to a specific kernel directly; if, say, a kernel feature is unavailable, it should be reported at the syscall layer through which the Go binary interacts with the kernel.
Edit: I remember reading a document stating that the Go source code is the “libc” equivalent in C/C++, but I misplaced it somewhere.
These are 2 independent domains, so I will answer each of them separately.
IMPORTANT NOTE: I need to clarify that I’m talking about switching kernels, not switching OS+ARCH in Go compilation.
So, to summarize, if I understand correctly, the internal packages will take care of supporting new features/ABIs from the kernel. The same goes for changes.
However, there is no formal way of knowing whether feature X is available other than mucking around with uname or lsb_release or something equivalent. Also, what happens when you try something but the kernel does not support it? Is there a normalized failure mode/indication?
As far as I understand Docker, you can’t swap in a particular kernel inside a container and trigger an OS restart directly. Per my understanding of LXC, a container only holds app- and user-level software. You might need OS-level containment (virtualizing the OS itself) for a seamless experience. See below:
As far as my experience tells me to date, I stick with yes, but again, without directly interacting with the kernel.
That’s your call in terms of user-experience design. Normal practice is having the app “initialize” to check every critical requirement before “running”.
For me, if it’s an app with a single, critical purpose, I will fail it critically in a way that grabs the attention of the dev-ops folks, then set the system to minimal resource consumption without having the app running. Similarly, there is no point running a USB implementation if the hardware offers no USB at all.