We all know that Go manages memory well with its garbage collector, but is it useful to declare a smaller type (say, int8) if we know the variable will never hold more than that type can represent?
If we have a counter that will never exceed 10, should we care that the system defaults to a 64-bit int when an int8 is enough to hold the data?
It depends on the implementation of the compiler.
I don’t know how the current official Go compiler handles it, but it is possible to pack eight
uint8s into a 64-bit word to reduce memory consumption.
The tradeoff is that it may take extra time to unpack the bytes each time the variable is accessed.
If the current Go compiler doesn’t do that, then a later version, or a different Go compiler, may. So you can write code using byte-sized integers with the idea that you are doing what you can to conserve memory.
Also, as far as the official specification for Go is concerned, there is no guarantee that the compiler will pack small integers or boolean values. The language specification is concerned with syntax, grammar, and semantics of the language, not the exact way it is implemented.
I heard an explanation of this which says that even when you only need a small data type, it can be more efficient to use a CPU-native type because of memory alignment. In other words, it is often faster to work with an aligned int64 than with an int32 or smaller.
Thanks for the insights @jayts and @geosoft1