How best to log something every minute, keeping performance in mind?

Hi, I’m a new member here! I’m building a small Linux monitoring app; one feature logs per-core CPU metrics such as idle, iowait, etc.

There will be daily log file rotation in place. If the user sets the monitoring interval to 1 minute, what’s the best way to log? Should I keep the file open and continuously write to it? Or open the file, write to it, and close it every minute, which seems like unnecessary overhead as the day goes on?

I want to choose the best approach. I was using the zerolog package early on, but later made my own custom logger that uses a sync.Pool buffer pool and a few other tricks (AI helped me create it). Mostly it let me write code in the style I like, e.g. log.Info("Something happened!").With(key, value). Still, I’d switch back to zerolog or whatever else helps performance.

My question may be somewhat vague, but I’m only looking for a proper way to log something every minute while making sure it won’t become a bottleneck on small Linux servers.

Regards.

Hello there. Depending on your exact use case, you can also check out the slog package from the standard library. If you are writing from a single stream of data, without goroutines, and the file is not accessed concurrently, I’d say the best option is to keep it open. The only caveat I can think of with that approach is that when the data actually moves from memory into the file is decided by the host system, unless you flush manually. There are also a couple of libraries that already give you rotation of your log files; I use some of them in a couple of my projects and, IMHO, performance was never an issue.
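For illustration, here is a minimal sketch of that approach with slog: open the file once, wrap it in a bufio.Writer, and flush after every sample so at most one record is lost on a crash. The file path, interval, and metric values are just placeholders for this example.

package main

import (
	"bufio"
	"log/slog"
	"os"
	"time"
)

func main() {
	// Open the log file once and keep it open for the lifetime of the process.
	f, err := os.OpenFile("/var/log/monitor/cpu.log", os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Buffer writes in memory; flush explicitly after each record so the OS
	// isn't left to decide when the data actually reaches disk.
	w := bufio.NewWriter(f)
	defer w.Flush()

	logger := slog.New(slog.NewJSONHandler(w, nil))

	ticker := time.NewTicker(time.Minute)
	defer ticker.Stop()
	for range ticker.C {
		logger.Info("cpu sample", "core", 0, "idle", 94.85, "iowait", 1.73)
		w.Flush()
	}
}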

That’s only 1,440 log records per daily log file. I think you are overthinking the issue unless your records are huge, in which case that’s the problem, not the 1,440 records.

I’ll admit that I did not read or digest the problem down to its core, but my approach is to log only the interesting stuff: measure everything, then only write a record if the difference is exciting enough.
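In code, that idea might look something like the sketch below: remember the last value that was actually logged and skip the write when the change is below some threshold. The threshold, metric name, and sample values are made up for this example.

package main

import (
	"log/slog"
	"math"
	"os"
)

// deltaLogger remembers the last value it logged for each metric and only
// writes a new record when the value has moved by more than threshold.
type deltaLogger struct {
	last      map[string]float64
	threshold float64
}

func (d *deltaLogger) maybeLog(logger *slog.Logger, name string, value float64) {
	prev, seen := d.last[name]
	if seen && math.Abs(value-prev) < d.threshold {
		return // change is not "exciting enough" to log
	}
	d.last[name] = value
	logger.Info("metric changed", "metric", name, "value", value)
}

func main() {
	logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
	d := &deltaLogger{last: map[string]float64{}, threshold: 5.0}
	d.maybeLog(logger, "iowait", 1.7) // first sample: always logged
	d.maybeLog(logger, "iowait", 2.1) // delta 0.4 from last logged value: skipped
	d.maybeLog(logger, "iowait", 9.3) // delta 7.6 from last logged value: logged
}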

Hello @neonix600,
For minimal overhead on small Linux servers, keep the log file open and write buffered entries every minute. Use zerolog with an io.Writer backed by a buffered file stream to avoid frequent open/close cycles. Your custom logger with sync.Pool is solid; zerolog can match that efficiency with structured logging and low allocation.

Best Regards,
Julie Batson
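As a rough sketch of what that could look like with zerolog (the file path and metric values are placeholders; adjust to your setup):

package main

import (
	"bufio"
	"os"
	"time"

	"github.com/rs/zerolog"
)

func main() {
	f, err := os.OpenFile("/var/log/monitor/cpu.log", os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// zerolog writes to any io.Writer; here that's a buffered stream over a
	// file that stays open for the whole run.
	w := bufio.NewWriter(f)
	defer w.Flush()

	logger := zerolog.New(w).With().Timestamp().Logger()

	for range time.Tick(time.Minute) {
		logger.Info().
			Int("core", 0).
			Float64("idle", 94.85).
			Float64("iowait", 1.73).
			Msg("cpu sample")
		w.Flush() // push the buffered record out once per interval
	}
}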

Another consideration for logging is to “asynchronously get it out the door”.

Many people use PaaS log aggregator services as a place to “ship logs” without realizing those services also accept a JSON POST to an HTTP endpoint. That gives you centralized visibility into these types of events, and you don’t have to worry about local I/O or logs filling up the disk, etc.

I believe if you look hard enough, services like Loggly still offer free accounts. Here is an example curl with a tag for your use case:

curl -H "content-type:application/json" \
  -d '{
    "app": "linux-monitor",
    "component": "cpu",
    "host": "myhost1",
    "metrics": {
      "user": 2.35,
      "system": 1.07,
      "idle": 94.85,
      "iowait": 1.73,
      "nice": 0.00,
      "irq": 0.00,
      "softirq": 0.00,
      "steal": 0.00
    },
    "timestamp": "2025-07-23T14:58:00Z"
  }' \
  http://logs-01.loggly.com/inputs//tag/hdwr-metrics/
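If you’d rather ship from the Go app itself, a rough sketch of the “asynchronously get it out the door” idea is below: the sampling loop drops records into a buffered channel and a background goroutine does the HTTP POST, so a slow or unreachable endpoint never blocks sampling. The endpoint URL and payload fields are placeholders, not any specific service’s API.

package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// record is whatever you want to ship once per interval.
type record struct {
	App       string             `json:"app"`
	Host      string             `json:"host"`
	Metrics   map[string]float64 `json:"metrics"`
	Timestamp time.Time          `json:"timestamp"`
}

func main() {
	const endpoint = "https://logs.example.com/ingest" // placeholder endpoint

	// A buffered channel decouples sampling from network I/O; if the shipper
	// falls behind, the sampler drops records instead of blocking.
	ch := make(chan record, 256)

	go func() { // shipper goroutine
		client := &http.Client{Timeout: 10 * time.Second}
		for r := range ch {
			body, err := json.Marshal(r)
			if err != nil {
				log.Printf("marshal: %v", err)
				continue
			}
			resp, err := client.Post(endpoint, "application/json", bytes.NewReader(body))
			if err != nil {
				log.Printf("ship: %v", err)
				continue
			}
			resp.Body.Close()
		}
	}()

	for range time.Tick(time.Minute) {
		r := record{
			App:       "linux-monitor",
			Host:      "myhost1",
			Metrics:   map[string]float64{"idle": 94.85, "iowait": 1.73},
			Timestamp: time.Now().UTC(),
		}
		select {
		case ch <- r:
		default:
			log.Println("shipper busy, dropping record")
		}
	}
}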