String to int (not int64) using ParseInt()/Atoi()


Question about string to int conversion using package: strconv

First, strconv.Atoi() returns type int.
Second, it appears that strconv.ParseInt() never returns a type int. However, the strconv documentation says:

“The bitSize argument specifies the integer type that the result must fit into. Bit sizes 0, 8, 16, 32, and 64 correspond to int, int8, int16, int32, and int64.”

So, if I do:

n, _ := strconv.ParseInt("42", 10, 0)

n should be an int, not int64. Right? But, it is int64.

Any ideas?



The bitSize argument specifies the integer type that the result must fit into

This does not mean that this type is actually used.

Maybe try to run the code on a 32 bit system and see what happens.

First, thank you so much for your reply.

Still, in the quote from the documentation (repeated below):

“The bitSize argument specifies the integer type that the result must fit into. Bit sizes 0, 8, 16, 32, and 64 correspond to int, int8, int16, int32, and int64.”

The second sentence implies that a bitsize of 0 corresponds to int.

What am I reading wrong? Thanks.

The bitSize argument specifies the integer type that the result must fit into

This means the int64 result can be safely converted to your integer type:

i64, _ := strconv.ParseInt("42", 10, 64)


i64, _ := strconv.ParseInt("42", 10, 32)
i32 := int32(i64) // safe conversion to int32


i64, _ := strconv.ParseInt("42", 10, 0)
i := int(i64) // safe conversion to int

int is either int32 or int64 depending on your system. With the bitSize 0 you make ParseInt choose the right type for you (32 or 64).


Perhaps I need to take this to the Language Definition forum. These do not conform to the specification: bit size 0 is supposed to return an “int”. Whether that is 32 or 64 bits is machine dependent, but it is supposed to be an “int”.

When converting a string to an integer, you need to consider the size limits of the integer type that will hold the value. Here are the limits:

const (
    MaxInt8   = 1<<7 - 1
    MinInt8   = -1 << 7
    MaxInt16  = 1<<15 - 1
    MinInt16  = -1 << 15
    MaxInt32  = 1<<31 - 1
    MinInt32  = -1 << 31
    MaxInt64  = 1<<63 - 1
    MinInt64  = -1 << 63
    MaxUint8  = 1<<8 - 1
    MaxUint16 = 1<<16 - 1
    MaxUint32 = 1<<32 - 1
    MaxUint64 = 1<<64 - 1
)


The bit size is requested because a string can express a value beyond the range of any integer type, including int64. You need to tell ParseInt the bit-size limit you’re planning to use so that it can convert (and report overflow) properly. For example:

  1. Parsing “65535” (MaxUint16) with a bit size of 8 (uint8, max 255) yields 255 and an out-of-range error instead of 65535.
  2. If you want to keep the actual value 65535, you need to increase the bit size to 16 (uint16) to accommodate the data.

NOTE: I’m using uintX data type here to keep the explanation simple about bit-size capacity.

To be fair, Go is very strict about data types, and as of today there are no generics. ParseInt needs to serve multiple integer types of different sizes — int, int8, int16, int32, and int64 — for different users (like you and me).

Another problem with int is that its bit size depends on the CPU architecture (so it is not consistent across 16-bit, 32-bit, and 64-bit platforms). That’s why it is better to return a single type, int64 (the largest), which the caller can convert to the appropriate integer type using a simple intX(...) conversion. That’s essentially what Atoi does.

If the standard library offered int, there would be requests for all the other integer types. That would mean wrapping ParseInt five times to return each integer type respectively.

That makes no sense in terms of long-term maintenance just to cover these edge-case requirements for every user.

To let users (with or without low-level knowledge of bit sizes and CPU architectures) easily convert a number string to an integer, a wrapper function called Atoi was created, which is essentially ParseInt(s, 10, 0) followed by a conversion to int.

To answer this question: since Go is open source, you can always check by reading the source code. You can easily switch the documentation view from package form to source form by changing pkg to src in the URL.

From the source code, 0 respects your CPU bit size. That means if you’re on, say:

  1. Raspberry Pi 3 (32-bit), int is 32-bit.
  2. Intel i5, i7, iX (64-bit), int is 64-bit.

As for the lack of description, you can contribute documentation updates to the Go source code. Please remember to read the contribution guidelines.

Personal opinion: since I call ParseInt for a specific conversion, I am already aware of the maximum data capacity. Hence there is no reason for me to use 0 and let ParseInt pick the bit size; I always specify it (8, 16, 32, or 64).


Wow :smiley: This was a very long answer for a question that was already answered 12 minutes earlier:

see strconv.parseInt() never returns int


