Regarding your relation -(MIN + 1) == MAX: you're mixing up the number itself with the binary nature of the data type. You can't say that 129 (base 10) is equal to -1 (base 10) just by looking at the binary data, even though they happen to share the same binary value: 0b10000001. To really see why, you have to understand why we still use bitwise operations on today's very powerful computers.
The Origin
The most native form of representation is always an unsigned number: you can say 0b10000001 is 129 (base 10), or, to me, switching on bit-8 and bit-1.
The reason is that bitwise operations are closely tied to the hardware itself. To put it bluntly, each bit maps to a "switch" in the hardware. Imagine each bit as one of your house light switches: setting bit-8 and bit-1 to 1 is equivalent to flipping your 8th and 1st light switches on.
Base-2 & Other Numbering System
In computer science, 8 bits of "data" can also mean the numbers 0 (0b00000000) to 255 (0b11111111) if we relate them mathematically. As you understand, 0b10000001 is base 2 while our everyday number 129 is base 10. By treating the bitwise "data" as a number, you can easily convert a base-2 value to any other base, like base-16 (hexadecimal), base-8 (octal), and base-10 (decimal).
Hence, a computer can be used for calculations, lots and lots of calculations.
The Signed Number?
As we all know, the number system has negative numbers, but bitwise data has no built-in sign flag. The maximum binary value is 255 (base 10), which is 0b11111111. Eventually, computer scientists decided: "hey, let's use the most significant bit (in our example, 0b10000000) as the sign flag". That means only 7 bits are left for the effective number, giving signed numbers this range (this scheme is called sign-magnitude representation):
0b11111111 == -127
...
0b10000011 == -3
0b10000010 == -2
0b10000001 == -1
0b10000000 == -0
0b00000000 == 0
0b00000001 == 1
0b00000010 == 2
0b00000011 == 3
...
0b01111111 == 127
compared to the previous unsigned "switches", which are:
0b00000000 == 0
0b00000001 == 1
0b00000010 == 2
0b00000011 == 3
...
0b11111111 == 255
Yet we still need to keep the original unsigned nature in order to control the computer (light switches do not understand math). That's why data types were defined: unsigned and signed. The uint8 you asked about is an unsigned integer, 8 bits wide, which means 0b11111111 is 255, not -127. The signed version is int8.
This is why languages like early C and Go are particularly strict with data types. You need to know your data's nature before computing with it; otherwise, 0b11111111 could mean anything.
Changing Bits?
With the data definitions in place, we need to operate on the numbers. For that, we use logic gates (hardware) to run the operations: AND, OR, XOR, NOT, and so on. Flipping the hardware switches has exactly the same interpretation in software. The Wikipedia article you linked explains each operation clearly, so I will skip repeating them.
At the hardware and unsigned data level, back to our 0b10000001 example: to turn the bit-7 switch on, we OR the data with our intended value.
0b10000001
OR 0b01000000
-----------------
0b11000001
-----------------
If we want to check the status of the bit-7 switch, we AND
the data with the intended value.
0b11000001
AND 0b01000000
-----------------
0b01000000
-----------------
You get the idea.
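In Go, those two switch operations look like this (a small sketch using the same example values):

```go
package main

import "fmt"

func main() {
	data := uint8(0b10000001)

	// Turn the bit-7 switch on: OR with the bit-7 mask.
	data |= 0b01000000
	fmt.Printf("0b%08b\n", data) // 0b11000001

	// Check the bit-7 switch: AND with the same mask.
	fmt.Println(data&0b01000000 != 0) // true
}
```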
Can I operate with NOT?
Say you now want to turn on all switches except the bit-2 switch. There are 2 ways to do it:
- You OR the value with 0b11111101
- You OR the value with the complement (NOT) of the bit-2 mask
0b10000001
OR (NOT 0b00000010)
-------------------
0b10000001
OR 0b11111101 (Simplified NOT 0b00000010)
-----------------
0b11111101
-----------------
NOT, or complement, simply means "the other side of the values". In bitwise terms: 1 if 0, 0 if 1. It makes sense, since 0 is the complement of 1 for the "binary" set (0 and 1) to be fulfilled, and vice versa.
With NOT, you can express the operation more clearly, like "turn on everything except the bit-2 switch" instead of "turn on bits 1, 3, 4, 5, 6, 7, 8".
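A related use of NOT is turning a switch off: AND with the NOT of its mask. Go even has a dedicated AND NOT ("bit clear") operator, &^, for exactly this pattern:

```go
package main

import "fmt"

func main() {
	data := uint8(0b11111111)

	// Turn the bit-2 switch off: AND with NOT of the bit-2 mask.
	fmt.Printf("0b%08b\n", data&(^uint8(0b00000010))) // 0b11111101

	// &^ ("AND NOT", bit clear) is shorthand for the same thing.
	fmt.Printf("0b%08b\n", data&^uint8(0b00000010)) // 0b11111101
}
```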
Math Operations Problem (Two's Complement)
Now, if you follow the sign-magnitude number system above, you bump into a mathematical operations problem. For example:
# decimal
15 + (-5)
= 15 - 5
= 10
# binary operations
0b00001111 + 0b10000101
= 0b10010100
= -20 (base 10) # so wrong
To overcome this, we make use of overflow, as @NobbZ explained, to represent negative numbers instead. Overflow means: 0b11111111 + 0b00000001 = 0b00000000 (the carry out of the 8th bit is simply discarded). That's where two's complement comes in.
To interpret 1 in both signs
+1 == 0b00000001
For -1, you complement in 2 "stages": NOT the positive value, then add 1:
NOT 0b00000001 (1st stage)
--------------
0b11111110
+ 0b00000001 (2nd stage)
--------------
0b11111111 (represent -1)
--------------
This way, when we do math operations in binary, they agree with ordinary arithmetic.
# decimal
1 + (-1)
= 1 - 1
= 0
# binary
0b00000001 + 0b11111111
= (1) 0b00000000
= 0b00000000
= 0 (base 10)
Hence, if we check our original example again with the two's-complement representations:
# decimal
15 + (-5)
= 15 - 5
= 10
# binary operations
0b00001111 + 0b11111011
= (1) 0b00001010
= 10 (base 10) # now it is working fine
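A quick Go check of this exact sum (Go's int8 is two's complement, so the bit patterns match the working above):

```go
package main

import "fmt"

func main() {
	a := int8(15) // 0b00001111
	b := int8(-5) // stored as 0b11111011 in two's complement

	fmt.Println(a + b)                 // 10
	fmt.Printf("0b%08b\n", uint8(b))   // 0b11111011
	fmt.Printf("0b%08b\n", uint8(a+b)) // 0b00001010
}
```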
Therefore, to keep math operations smooth while still indicating signed numbers, we two's-complement all negative numbers instead. This officially changes the signed number representations to:
-128 | (was -0: 0b10000000) --> (NOT 0b10000000) + 1 = 0b01111111 + 1 = 0b10000000
-127 | 0b11111111 --> (NOT 0b01111111) + 1 = 0b10000000 + 1 = 0b10000001
...
-3 | 0b10000011 --> (NOT 0b00000011) + 1 = 0b11111100 + 1 = 0b11111101
-2 | 0b10000010 --> (NOT 0b00000010) + 1 = 0b11111101 + 1 = 0b11111110
-1 | 0b10000001 --> (NOT 0b00000001) + 1 = 0b11111110 + 1 = 0b11111111
0 | 0b00000000
1 | 0b00000001
2 | 0b00000010
3 | 0b00000011
...
127 | 0b01111111
That's why with signed integers you get one extra value on the negative side: the range is [-128, 127].
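This asymmetry is exactly where the -(MIN + 1) == MAX relation from your question comes from, and it is also why negating -128 wraps back to -128. A quick Go check:

```go
package main

import "fmt"

func main() {
	const minInt8, maxInt8 = int8(-128), int8(127)

	// NOT-ing the bits of MIN + 1 gives exactly the bits of MAX.
	fmt.Println(-(minInt8 + 1) == maxInt8) // true

	// -128 has no positive partner: (NOT 0b10000000) + 1 == 0b10000000,
	// so negating it wraps back to itself.
	v := minInt8
	fmt.Println(-v) // -128
}
```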
Scaling The Size
Now that we understand how bitwise operations work, let's scale from 8-bit to 16-bit (think of the 8086 microprocessor), then to 32-bit (IA-32 processors), and on to 64-bit (AMD64 processors).
At each scaling, with all the bits switched on, the magnitude of the data gets bigger. You get:
- 16-bit has a maximum of 65535 (uint16), and the range [-32768, 32767] (int16)
- 32-bit has a maximum of 4294967295 (uint32), and the range [-2147483648, 2147483647] (int32)
…
TL;DR: each time you add a bit of space, the total number of representable values scales significantly (it doubles per bit).
Translate to Code
To visualize this in code, instead of writing mathematical operations like the above, our code uses a different notation. Back to our NOT example, in Go the code simply means:
switches := uint8(0b10000001)
output := switches | (^uint8(0b00000010))
fmt.Printf("0b%b\n", output)
// output: 0b11111101
Applying good coding practice, you get:
package main

import (
	"fmt"
)

const (
	switch1 = uint8(0b00000001)
	switch2 = uint8(0b00000010)
	switch8 = uint8(0b10000000)
)

func main() {
	switches := switch8 | switch1
	output := switches | (^switch2)
	fmt.Printf("0b%b\n", output)
}

// output: 0b11111101
When Do We Use Bitwise Operations
- Interfacing with hardware (the light switches, remember?). This is common in the field of the Internet of Things.
- Highly optimized flag registers. Some of us (like me) use bitwise operations to build flag registers: multiple boolean cases packed under a single data type.
- High-speed data processing (e.g. real-time signal processing) where setting bits in parallel is required (e.g. filters).
- Cryptographic cipher development.
Nos. 1, 3, and 4 are the main, rational reasons to use bitwise operations. No. 2 is rarely needed; I barely use it in Go apart from killing time with curiosity.
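For the curious, here's a minimal sketch of use case no. 2, a flag register in Go (the flag names are made up for illustration):

```go
package main

import "fmt"

// Flags packs several booleans into a single uint8.
type Flags uint8

const (
	FlagRunning Flags = 1 << iota // 0b00000001
	FlagPaused                    // 0b00000010
	FlagDirty                     // 0b00000100
)

func main() {
	var f Flags
	f |= FlagRunning | FlagDirty // switch two flags on
	f &^= FlagDirty              // switch one flag back off

	fmt.Println(f&FlagRunning != 0) // true
	fmt.Println(f&FlagDirty != 0)   // false
}
```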
Hope this clears things up.
p/s: by the way, is this your module assignment?