I always use plain ints unless I need a specific number of bits for a particular reason (e.g., a wire format over the network) or I know 32 bits will not be enough when running on a 32 bit platform (database object IDs, etc.).
Many things are naturally ints (indexes, lengths, the default type of untyped integer constants, …) and you’ll be fighting this for no reason by hardcoding bitness. You’ll also make me wonder why you want specifically 32 bits and not the platform default, etc.
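To illustrate the friction (a minimal sketch, not from the original post): `len` returns an `int` and slice indexes are `int`, so a plain `int` composes with the rest of the language, while a hardcoded width forces a conversion at every boundary.

```go
package main

import "fmt"

func main() {
	xs := []string{"a", "b", "c"}

	// len returns int and slice indexes are int, so a plain int
	// works everywhere without conversions.
	for i := 0; i < len(xs); i++ {
		fmt.Println(i, xs[i])
	}

	// Hardcoding bitness means casting just to store a length.
	var n int32 = int32(len(xs))
	fmt.Println(n) // prints 3
}
```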
It’s sort of a habit I brought over from C programming.
What you say makes sense.
So specifying int without a bit size will use the platform default?
I guess my real frustration came from the strconv package, where the functions return types with explicit bit sizes, so when using int you end up with a lot of casting.
Yeah, strconv.ParseInt, being the Swiss Army knife of int parsing, does indeed return an int64, as this is the “superset” of all the other int types. Otherwise it would need to provide a bunch of ParseInt{64,32,16,8} functions; instead it just takes a bitSize argument.
There is also strconv.Atoi if you don’t particularly care, or you know that the value is small enough to fit in a regular int.
The Go int is 32 bits on 32 bit platforms, 64 bits on 64 bit platforms. It’s one of the few types whose size changes with the platform (uint and uintptr being the other two I can think of offhand, plus pointers of course, but we are not really directly exposed to their size).
Signed. You should read the spec: https://golang.org/ref/spec. I mean this not at all in the “rtfm” way, just that it really is a short and readable spec that explains a lot of useful things. Certainly you shouldn’t avoid it in the belief that language specs are for language lawyers and otherwise unintelligible. It is very much not like the C spec in that way.