# Numeric constants

https://tour.golang.org/basics/16

I don’t understand this at all.

1 Like

What exactly is confusing you?
There are several types for numeric representation. You should pick the type that best fits the task.

2 Likes

Hi Cherolyn,

When you just make a statement, it makes me want to reply, “That’s nice.”

Divide and conquer. Take it a piece at a time and see if you can understand the pieces.

Do you understand what the `<<` operator does?

4 Likes

No I don’t understand what that operator does

1 Like

Well there is a comment about that in code above:

```go
// Create a huge number by shifting a 1 bit left 100 places.
// In other words, the binary number that is 1 followed by 100 zeroes.
Big = 1 << 100
```
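As a minimal sketch of what `<<` (and its counterpart `>>`) does, separate from the Tour code:

```go
package main

import "fmt"

func main() {
	fmt.Println(1 << 0) // 1: shifting by 0 changes nothing
	fmt.Println(1 << 1) // 2: shifting left by 1 doubles the value
	fmt.Println(1 << 3) // 8: shifting left by n multiplies by 2^n
	fmt.Println(6 >> 1) // 3: >> shifts right, halving the value
}
```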

If you want to go deeper, here is a good article, but I think you can set the operators aside for now.

Bit Hacking with Go

3 Likes

“Numeric constants are high-precision values” …what are high-precision values? I’ve never heard this term before.

I tried this: Try printing `needInt(Big)` too. …and got

```
./prog.go:20:30: syntax error: unexpected newline, expecting comma or )
./prog.go:21:28: syntax error: unexpected ) at end of statement
```

// Create a huge number by shifting a 1 bit left 100 places. How do I do this?
Is this what’s happening with `Big = 1 << 100`?

```go
// Shift it right again 99 places, so we end up with 1<<1, or 2.
Small = Big >> 99
```

I think I’m beginning to get it.

`func needInt(x int) int { return x*10 + 1 }` … is `(x int)` a parameter? … does `x*10` mean x times 10?

That’s a start

1 Like
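For reference, the Tour snippet under discussion can be reconstructed as a complete runnable program (a sketch assuming the standard Tour code; `(x int)` is indeed a parameter, and `x*10` does mean x times 10):

```go
package main

import "fmt"

const (
	// Create a huge number by shifting a 1 bit left 100 places.
	Big = 1 << 100
	// Shift it right again 99 places, so we end up with 1<<1, or 2.
	Small = Big >> 99
)

// (x int) declares a single parameter named x of type int.
func needInt(x int) int { return x*10 + 1 }

func needFloat(x float64) float64 { return x * 0.1 }

func main() {
	fmt.Println(needInt(Small))   // 21
	fmt.Println(needFloat(Small)) // 0.2
	fmt.Println(needFloat(Big))   // 1.2676506002282294e+29
	// fmt.Println(needInt(Big)) // would not compile: the constant overflows int
}
```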

It should be understood literally.

Go has two floating point types - float32 and float64 .
float32 occupies 32 bits in memory and stores values in single-precision floating point format.
float64 occupies 64 bits in memory and stores values in double-precision floating point format.

So high-precision means float64.

Right.

2 Likes

In many programming languages, constants (e.g. literal numbers in the code) have an associated data type. In Go, constants are what I’ll call “Schrödinger-esque” in that they don’t have a type until they are “observed.” Observed here means read or written at run time.

Here’s an example I put together with strings: Go Playground - The Go Programming Language

In the example from the tour of Go, the constant, `Big`, is defined as the value `1 << 100`. That expression means to take a 1 and shift its conceptual bit(s) to the left 100 positions. Most computers today are 64-bit, so a bit shifted 100 positions would “fall” right over the end and you’d end up with some invalid result like 0 or -1. If you were to then take that result and shift it back to the right 99 positions such as in the example of `Small`’s definition, you’d end up with some other invalid and/or unexpected value.

My understanding of the purpose of the example is to show that when you’re working with constants, they don’t have a data type, so you can define huge constants and then build up other constants with expressions on previous ones (like how `Small` is defined as `Big >> 99`). It’s only when you try to use these constants in code that the datatype is decided based on the type of the variable the constant is being assigned to.
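A small sketch of that “observation” step (the constant names here are illustrative, not from the Tour):

```go
package main

import "fmt"

const huge = 1 << 100 // untyped constant: fine at compile time, no type yet

func main() {
	const small = huge >> 99 // constant arithmetic, still untyped
	var i int = small        // "observed" as int
	var f float64 = small    // "observed" as float64
	fmt.Println(i, f)        // 2 2
	// var x int = huge      // would not compile: constant overflows int
}
```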

4 Likes

It may help to explain that bitwise operators, just like addition and subtraction, are handled in CPUs directly in hardware, using switches and memory cells that hold a bit each.

A CPU has a datapath that is a certain number of bits wide, which in the case of a 64-bit CPU (like Intel’s Core i3 - i9 processors) is 64 bits wide. That width imposes a physical limitation on the digital circuitry; if a mathematical operation produces a result outside of what the physical circuit can hold, the circuit produces an incorrect result (if any).

Let’s look at bit shifting with an 8-bit architecture, to make it simpler.

```
00000001
```

The above is the number 1 represented as an 8-bit binary number. Suppose the CPU has that number in one of its registers (which is somewhat like a local variable in hardware), and executes an instruction that tells it to shift all of the bits left by one. In Go, this may be written as

x = x << 1

and the compiler might produce assembly code that looks something like

LSL r1, 1

(for "Logical Shift Left register 1 by 1 bit"). This is a single instruction for the CPU. So the CPU executes that instruction and then here is what is in the register:

```
00000010
```

Keep in mind that it isn’t just the `1` that is shifted left; the `0`s are shifted, too. The whole 8-bit word is shifted, with a `0` being placed in the rightmost position, and the leftmost position, whatever it was, is discarded.

Shifting left 6 more places produces

```
10000000
```

and shifting left one more place produces

```
00000000
```

Oops. 8 bits can’t hold the `1` anymore. It’s not a correct result mathematically, but it’s correct behavior for the computer because it’s designed that way. As a programmer, you need to understand how the computer works. It’s your responsibility, not the computer’s “fault”.
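That 8-bit walkthrough can be reproduced in Go using a `uint8` variable (a minimal sketch; `%08b` prints the value as 8 binary digits):

```go
package main

import "fmt"

func main() {
	var x uint8 = 1
	fmt.Printf("%08b\n", x) // 00000001
	x <<= 1
	fmt.Printf("%08b\n", x) // 00000010
	x <<= 6
	fmt.Printf("%08b\n", x) // 10000000
	x <<= 1
	fmt.Printf("%08b\n", x) // 00000000: the 1 was shifted out and lost
}
```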

So back to Go’s “high precision” constants.

In Go, constants are handled by the compiler, before machine code is generated at all! So the hardware limitation of the CPU is not a factor. When you write `1 << 100`, the `1` is not lost. The number is retained in mathematically correct form by the compiler as a 1 with 100 zeros after it. And then it can be shifted right 99 places to result in 2 (`10` in binary).

4 Likes

Thanks for such detail! I will study again later. My computer is about to die!

1 Like

So high-precision means float64.

cherilexvold1974:

So does that mean that it takes 64 bits to write them?

Is this what’s happening with …Big = 1 << 100?

Right.

Got it

1 Like

Bit Hacking with Go

That’s why you cannot use it with `needInt` in the example code.