Inconsistent behaviour when converting float to uint32 on ARM


I’ve just spent a couple of hours chasing a weird bug in a larger codebase.

It all boils down to the following example:

package main

import "fmt"

func main() {
	var f float32
	f = -1.0

	fmt.Printf("float32 to uint32: %v\n", uint32(f))
	fmt.Printf("float32 to uint64: %v\n", uint64(f))
	fmt.Printf("float32 to int32: %v\n", int32(f))
	fmt.Printf("float32 to int64: %v\n", int64(f))

	fmt.Printf("float32 to int32 to uint32: %v\n", uint32(int32(f)))
	fmt.Printf("float32 to int64 to uint64: %v\n", uint64(int64(f)))
}

If I run it on my amd64 workstation the output looks pretty much as expected:

float32 to uint32: 4294967295
float32 to uint64: 18446744073709551615
float32 to int32: -1
float32 to int64: -1
float32 to int32 to uint32: 4294967295
float32 to int64 to uint64: 18446744073709551615

If I cross-compile the same code using GOOS=linux GOARCH=arm GOARM=7 go build -o f32test main.go and run the binary on a Raspberry Pi I get:

float32 to uint32: 0
float32 to uint64: 18446744073709551615
float32 to int32: -1
float32 to int64: -1
float32 to int32 to uint32: 4294967295
float32 to int64 to uint64: 18446744073709551615

Can anyone explain why the first conversion is zero?

The section on numeric types in the language spec states that float32, int32, and uint32 are predeclared architecture-independent numeric types, so I’d expect their behavior to be the same across different architectures.
Also, the section on conversions does not mention any undefined or architecture-specific behaviour.

To make things even more confusing, uint32(int32(f)) is consistent across the different architectures.
Looking at the spec I can’t see a reason why this two-step conversion should return something different than simply uint32(f) (at least for this test case; I haven’t looked into NaN or ±Inf).

So is this behaviour expected and documented somewhere or should I open a compiler bug?


For anyone who finds this many moons later:
I’ve opened a bug uint32(float32(-1)) returns 0 when crosscompiled for arm · Issue #57837 · golang/go · GitHub

This is an interesting question but I didn’t answer because after using Compiler Explorer to attempt to investigate what’s happening, I still couldn’t figure it out. For posterity, randall77 replied:

From the spec:

In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.

So I don’t think this is a bug. It’s just how out of range float->int conversions work.

Similar to #56023

So it seems different architectures handle this differently.


You’re converting an invalid numeric value to a uintN type, where uintN values are always positive from a numerical perspective. It already makes no sense from the get-go.

If you’re looking for bitwise manipulation, you need a data type conversion function like S32_IEEE754_FloatToBits, at least for IEEE 754 definitions.

NO. It is down to the CPU datasheet and the OS designer to handle the invalid state. Default behaviour is usually crash and burn.

For the Pi case, I assume they set it to 0 because, mathematically speaking, scoping the value range of [0, ∞) for a uintN type means all negative values are hard-limited to 0 as the minimum.

For the amd64 case, AFAIK, bit-wise manipulation is fulfilled, so you’ll see the max number of the uintN (in your case, uint32 is 4294967295).

This makes sense because the Pi is designed for education purposes by default, so the mathematical context has higher priority.

You’re going to get some weird outcome that nobody cares about either.

Floating points are not precise numbers but approximations in nature. This also means they have their dedicated methods to query their states and values. In other words, you cannot simply convert their value primitively (e.g. floatX to intY) without making a lot of assumptions beforehand (e.g. never hits NaN, always positive, never goes out of bounds / infinity, value distortion, etc.).

When I say validation, you need to call dedicated pre-check functions (the math package in the standard library) like IsNaN to clear those assumptions before operating on the value. (Hence, this is why nobody cares, because all assumptions are handled explicitly.)
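The defensive-guarding idea described above could look roughly like the following sketch. Note that float32ToUint32 is a hypothetical helper name invented for this example, not a standard-library function:

```go
package main

import (
	"fmt"
	"math"
)

// float32ToUint32 is a hypothetical guard: it rejects inputs for which
// a plain uint32(f) conversion would be implementation-dependent per
// the spec (NaN, negative values, values above MaxUint32).
func float32ToUint32(f float32) (uint32, error) {
	f64 := float64(f)
	if math.IsNaN(f64) || f64 < 0 || f64 > math.MaxUint32 {
		return 0, fmt.Errorf("value %v out of range for uint32", f)
	}
	return uint32(f), nil
}

func main() {
	if _, err := float32ToUint32(-1.0); err != nil {
		fmt.Println("rejected:", err)
	}
	u, _ := float32ToUint32(42.5)
	fmt.Println(u) // 42 (fraction truncated, as the spec defines)
}
```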


Normally I’d have totally ignored further replies to this thread; the answer by randall77 over on GitHub is exactly what I was looking for. Thanks to @skillian for posting it here, btw. I was busy and forgot.

But this last reply has so many… interesting misconceptions about what is going on here
that I’d like to correct some of them for the benefit of anyone who stumbles upon this topic in the future.

So first off, “invalid numeric value” … last time I checked IEEE Std 754-2019, -1 was a perfectly fine value.
Also, the program is a syntactically valid Go program (it compiles fine), so it has to have some behaviour that adheres to the language specification. The behaviour might be an undefined form of failure, but then again the spec should say so.

The function you linked to, S32_IEEE754_FloatToBits, is literally the same as math.Float32bits, so there is no need to use an external library there.
Both of those functions return the bits of the IEEE 754 binary representation (binary is important here if we want to be absolutely correct, as the standard also specifies additional representations that could be used instead).

However, my attempt was to do a conversion as per the section “Conversions between numeric types” of the language specification.
That section states:
2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).
Clearly that is not what your S32_IEEE754_FloatToBits does, unless 1109917696 is the closest integer to 42.0.

Let’s just assume by CPU processor datasheet you meant the specification for the instruction set.
Since the target I specified has a FPU and we don’t use softfloat, the OS has actually nothing to do with it.
Now if we look into the ARM Compiler toolchain Assembler Reference Version 5.03 the obvious instruction to use for this conversion is vcvt.
This, however, is a choice made by the designers of the Go runtime (not by the CPU or the OS); they could just as well implement this part in software.
Also I did not check the disassembly to see what is actually being used, but it seems like a reasonable enough guess.
So if we check the documentation for it carefully, we can verify that there are signed and unsigned variants of that instruction.
Further, it specifies that you can change the rounding behaviour of that instruction, and the default is “Otherwise, the operation rounds towards zero.”
Round towards zero, in turn, is a well-defined operation in the IEEE 754 standard.
For vcvt this means the conversion saturates to the target type.
Using the f32 to u32 variant, any floating point value less than 0 will become 0 and anything larger than MaxUint32 will become MaxUint32.
This behavior is exploited in other languages such as Rust to provide safe saturating conversions.
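For illustration, the saturating semantics that vcvt provides in hardware could be sketched in software like this. saturatingF32ToU32 is an invented name for this example; it is not what the gc compiler actually emits:

```go
package main

import (
	"fmt"
	"math"
)

// saturatingF32ToU32 mimics the clamping behaviour of ARM's vcvt
// (f32 -> u32): out-of-range inputs clamp to the nearest representable
// bound, and NaN maps to 0. This is only a sketch of the semantics.
func saturatingF32ToU32(f float32) uint32 {
	f64 := float64(f)
	switch {
	case math.IsNaN(f64) || f64 <= 0:
		return 0
	case f64 >= math.MaxUint32:
		return math.MaxUint32
	default:
		return uint32(f) // in range: defined truncation towards zero
	}
}

func main() {
	fmt.Println(saturatingF32ToU32(-1))   // 0
	fmt.Println(saturatingF32ToU32(1e30)) // 4294967295
	fmt.Println(saturatingF32ToU32(42.7)) // 42
}
```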
However, not all architectures have similar instructions, which can lead to a bit of a challenge for people working on compilers if they chose saturation to handle “overflows” in this conversion.
Hence the Go designers opted not to handle those special cases and declared the result implementation-dependent,
which is totally fine by me as long as it is explained as such in the documentation.
This also explains the different behaviour on x86 where there is no direct equivalent to vcvt.
Also, since I’m running the CPU in 32-bit mode (GOARCH=arm, not GOARCH=arm64), I’d assume some extra instructions have to be generated to handle the 64-bit case, which might explain the different result for the 64-bit conversions.
(The irony that this is basically an answer to the “why does that happen” part of my initial question is not entirely lost on me; actually, I only needed to know whether the behaviour was in spec.)

I somehow have a hard time believing the Raspberry Pi Foundation managed to convince Broadcom to produce a custom FPU for them to make the Pi better for education.

At least the person who wrote the section of the language spec had to care enough to specify the behaviour for that case. Otherwise they wouldn’t have bothered to add the part that randall77 quoted in his response over at github.

Just to clarify some contexts really CLEAR:

  1. I DID NOT state that IEEE 754 float values cannot accept the value -1 in any context; I’m stating that the action of pushing -1 (or -1.0) into a uintN data type, which has a hard lower limit of 0, already makes no mathematical sense (invalid for the latter). Hence, it can be safely assumed you’re investigating other aspects like the bitwise-operations context, not mathematical ones.
  2. Just in case there’s another misunderstanding: uintN can accept values outside its mathematical limits (as in negative values, treated as bit masking) but mainly for bit-wise operations (pin switching, etc.). In that context, its mathematical numeric representation makes zero sense.
  3. In line with (1), that’s why I introduced S32_IEEE754_FloatToBits or math.Float32bits; they also assume you’re investigating the low-level side of floating points and already have a strong understanding of them. Those functions are not meant for you to perform conversions but allow you to investigate further via the binary values.
  4. I didn’t introduce any floating-point conversion functions because I don’t have your context for using the value, like:
    4.1. How do you want your Mantissa be (accuracy, resolution, etc)?
    4.2. Precision?
    4.3. Accuracy?
    4.4. What is your acceptable error?
    4.5. What is your acceptable range?

I assumed you already knew about floating point: picking a value from defined and controlled boundaries, while anything else is chaotic and erroneous from a mathematical perspective (this is aligned with the spec).

Hence, for the -1 case above, since the mathematical standpoint already makes no sense, any outcome (be it 0 or max32 or 0xDEAD0000) can be safely rejected. Doing this programmatically is why I mentioned defensive guarding before the conversion.

I’m speaking of the CPU/SoC hardware datasheet, not specifically the instruction set section (otherwise I would have called it out). Definitely an assumption.

I’m referring to the I/O section (specifically hardware I/O behaviour). That should give you strong evidence of why it is 0.

Sorry, this part is my bad. Wrong choice of word. Can confirm.

I meant the entire software side of the computing system (hardware + driver + interface + app), not the software operating system like Raspbian (hardware + driver + interface) that we know of.

A board that is:

  1. specifically designed for education in the first place, targeting ages as low as 5 (as of 2023);
  2. sold worldwide;
  3. later heavily used in the tinkering market;
  4. using a very customizable SoC;
  5. a certified non-profit organization.

Should be fine. Note that this is a hypothesis and an assumption. Again, consult the (hardware) datasheet to confirm it if you have one.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.