Hello, I am using mux and godaemon but there is a little problem…
Here is my project : https://github.com/dimensi0n/flume
I updated to godaemon this morning, but when I run my server in the background and curl localhost:8080/auth/all, the server doesn't seem to be running…
However, I can't use my -t flag because of godaemon… I don't know why…
How do you start your app? And on which operating system are you?
On Linux, child processes often die along with their parent (typically via the SIGHUP the closing terminal sends). And this is exactly what daemonizing works around: it starts your program as a child process in a new session, and then the parent exits.
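To make that parent/child split concrete, here is a minimal Go sketch of the kind of re-exec a daemonizing library performs. This is not godaemon's actual implementation; the FLUME_DAEMON variable and the function names are invented for illustration.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

// isDaemonChild reports whether this process is the re-exec'd child.
func isDaemonChild() bool { return os.Getenv("FLUME_DAEMON") == "1" }

// daemonize re-executes the current binary in a new session (so it is
// detached from the terminal) and returns the child's PID; the caller,
// i.e. the parent, should then exit.
func daemonize() (int, error) {
	cmd := exec.Command(os.Args[0])
	cmd.Env = append(os.Environ(), "FLUME_DAEMON=1")
	cmd.SysProcAttr = &syscall.SysProcAttr{Setsid: true} // new session, no controlling terminal
	if err := cmd.Start(); err != nil {
		return 0, err
	}
	return cmd.Process.Pid, nil
}

func main() {
	if isDaemonChild() {
		fmt.Println("daemon child running")
		return // the real HTTP server would start listening here
	}
	pid, err := daemonize()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("parent exiting, child pid:", pid)
}
```

Because the parent exits immediately after forking, anything that reads flags or prompts on the terminal (like a -t flag) has to happen before the daemonize step, which may be why the flag stops working.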
I am on Ubuntu 18.04… I did a go build and then ran ./flume
According to the godaemon README, you need to provide
It's fixed now: I switched to Docker, and now it runs in the background and I can deploy it more easily.
In most cases Docker is not the correct solution. I'm pretty sure that writing a proper systemd unit or init script (depending on which init system you use) is usually the cleaner and better solution to this problem. Also, wrapping it in a proper package for the target system is usually much better than a Docker service.
Docker really only becomes relevant if you want to deploy with Kubernetes, into swarms, or other kinds of cloud setups.
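For reference, a systemd unit for something like this is short. This is a hypothetical sketch; the binary path, user name, and service name are assumptions to adjust for your setup:

```ini
# /etc/systemd/system/flume.service
[Unit]
Description=flume server
After=network.target

[Service]
ExecStart=/usr/local/bin/flume
Restart=on-failure
User=flume

[Install]
WantedBy=multi-user.target
```

After `sudo systemctl enable --now flume`, systemd keeps the process in the background and restarts it on failure, so the application itself no longer needs to daemonize at all.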
Yes, but the problem is: I'll have to compile it for multiple OSes, and it will take me an eternity to do that…
Citation needed. What makes you say these things? I would personally prefer to deploy something with Docker rather than having to deal with systemd. It also prevents you from polluting your filesystem with things related to running your services, avoiding cases where your service only works due to side effects.
There are plenty of reasons for running things in Docker images, and plenty not to. In any case, I'm with Norbert in that fixing an underlying problem by isolating something that didn't run isn't a fix at all; it's hiding it under the rug, and the real issue may come back to bite you later.
No citation available; it's just personal experience. Docker breaks the package manager's dependency tracking.
When you remove Docker, your service stops working, without apt or yum warning you.
It pulls in a whole operating system of 100 MB or more (at least that's the case for many badly written Docker images) of unneeded stuff just to run a 10 MB executable, which would run just as well natively.
Last but not least, there are issues in Docker that make it possible for the guest to change things on the host that would normally require root permissions.
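For what it's worth, the image-size complaint is avoidable with a multi-stage build. A hypothetical sketch, assuming the repo builds with a plain `go build` and a static binary:

```dockerfile
# Build stage: full Go toolchain, discarded afterwards.
FROM golang AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /flume .

# Final stage: only the static binary, no OS userland at all.
FROM scratch
COPY --from=build /flume /flume
EXPOSE 8080
ENTRYPOINT ["/flume"]
```

The resulting image is roughly the size of the binary itself, though the other objections (opacity to the package manager, monitoring, host-escape issues) still stand.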
This is so true. From a sysadmin perspective, if your stuff isn't available in the repositories, we are not going to deploy it. Sometimes we have to, of course, but there is nothing more annoying than something that expects internet access and/or uses custom specific versions packaged along with the product.
"But it works for me on my laptop"
The worst case is when the developer depends on what are basically nightly builds of whatever technologies the project uses and expects us to have those in production as well. Not going to happen.
That really does not cut it in a production environment where you want everything to be reproducible. And Docker is not the solution: it would just put your application inside a little container I have no insight into, and would require you/us to also add our monitoring platform inside it, and so on. And why would this be in a container? I would only use containers for something that is completely stateless and needs to run distributed. If the container would just run “forever”, I would skip Docker entirely and set it up on the machine directly.
Cross-compiling Go code is really easy, and just as quick as compiling it in the first place. If you think compiling Go code takes an eternity, you are not very patient.
There is also Makefile magic for this
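For example, the Go toolchain cross-compiles with just the GOOS and GOARCH environment variables, so the "Makefile magic" can be as small as this (the binary name and target list are assumptions; extend as needed):

```makefile
build-linux:
	GOOS=linux GOARCH=amd64 go build -o bin/flume-linux-amd64 .

build-darwin:
	GOOS=darwin GOARCH=amd64 go build -o bin/flume-darwin-amd64 .

build-windows:
	GOOS=windows GOARCH=amd64 go build -o bin/flume-windows-amd64.exe .
```

Each target runs in roughly the same time as a native build, since no separate cross-toolchain is involved.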
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.