If your background includes only high-level interpreted languages, you may need to become more familiar with how the actual hardware works. In hardware, memory is just a sequence of storage locations. In compiled languages like C or Go, arrays are just a human-friendly way to refer to a section of memory. Memory holds binary data, and memory locations are accessed by addresses, which is what the C and Go notion of a pointer is about.
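To make that concrete, here is a minimal Go sketch (variable names are my own) showing that a pointer is just a value holding the address of a memory location, which you can read and write through:

```go
package main

import "fmt"

func main() {
	x := int64(42)
	p := &x // p holds the memory address where x is stored

	fmt.Println(*p) // dereference: read the value at that address → 42

	*p = 7          // write through the pointer into the same location
	fmt.Println(x)  // → 7, because p and x name the same memory
}
```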
Consider this photo of a Micron MT4C1024 1-Megabit memory chip, from about 1989:
Full resolution: https://upload.wikimedia.org/wikipedia/commons/9/9b/MT4C1024-HD.jpg
(It’s cool because you can actually see each bit.)
You can see that in hardware, memory is a huge physical array of bits. (Of course, huge here refers to the number of bits, not the physical size.)
The way the chip works is that when a binary value is placed on its address pins, it returns the bit at that address. The address is split up (or “decoded”) to select a particular row and column out of the entire array of bits, and at that row and column sits one bit of storage. This particular chip is bit-addressable: it returns just one bit, but by using 8 of them in parallel you can store 8-bit bytes, and 64 in parallel can store a 64-bit word (like an int64, float64, or pointer). (It’s also possible to store 8 or more parallel bits in the same chip, but this particular product was not designed that way. The rectangular segments you see exist for electrical reasons, not to organize the memory into bytes: they distribute power, ground, and signals to smaller areas, which results in faster operation.)
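The row/column split can be sketched in a few lines of code. This is a simplified, hypothetical decode for a 1,024 × 1,024 cell array (2^20 = 1 megabit); the real MT4C1024, like other DRAMs of its era, multiplexes the row and column halves onto shared pins with separate strobe signals, which this sketch ignores:

```go
package main

import "fmt"

func main() {
	// Hypothetical decode: split a 20-bit bit-address into a
	// 10-bit row (upper bits) and a 10-bit column (lower bits).
	addr := uint32(0x5A3C7) // an arbitrary 20-bit address

	row := addr >> 10    // upper 10 bits select the row
	col := addr & 0x3FF  // lower 10 bits select the column

	fmt.Printf("row=%d col=%d\n", row, col) // → row=360 col=967
}
```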
At a higher level in the design of the hardware, memory is handled as a consecutive block of bytes, with each byte addressed by consecutive integer numbers called addresses.
To efficiently store an array of equal-sized data elements in hardware memory, it’s a simple matter of using consecutive areas of physical memory. For example, if you have an array of 64-bit integers (8 bytes each), store the integers next to each other, 8 bytes apart. No space is wasted, and the hardware can access the next element in the array by adding 8 to the address of the previous one. Or, to get the Nth element (counting from zero), take the address of the first element and add N*8 to it. These calculations are simple and are done by the CPU while it executes its low-level machine code instructions. So having arrays implemented like this in a language is “natural” for the (artificial) hardware.
Resizing or copying arrays is not efficient, simply because it takes many machine code instructions, and therefore many clock cycles, to copy all of the bytes. That’s one good reason why slices, which describe a section of an underlying array without copying it, are a good way to process array-like data in Go.
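A quick sketch of why slices avoid that cost: a slice is a small header (pointer, length, capacity) describing a view into an existing array, so creating one copies no elements at all:

```go
package main

import "fmt"

func main() {
	a := [6]int64{10, 20, 30, 40, 50, 60}

	// Slicing copies only the small slice header, not the elements;
	// s is a view into a's memory starting at a[1].
	s := a[1:4]
	s[0] = 99 // writes through to the underlying array

	fmt.Println(a[1])           // → 99: a and s share the same memory
	fmt.Println(len(s), cap(s)) // → 3 5: length 3, capacity to index 5
}
```

This is also why passing a slice to a function is cheap no matter how large the underlying array is: only the header travels.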
If you continue to have difficulty with slices, I suggest you take a look at William Kennedy’s book Go in Action (https://www.manning.com/books/go-in-action) or his video course Ultimate Go (2nd ed.) (https://www.oreilly.com/library/view/ultimate-go-programming/9780135261651/). In the video course, he is very good at explaining that Go is all about working directly with the hardware, which he calls “mechanical sympathy”.