Noob is wondering why slices need to exist

as a noob to golang, i’m clueless as to why slices need to exist. I’ve used arrays and lists all my coding life and never felt the urge to make anything resembling a slice to interface with my arrays.
I can’t find a word on the internet about why go-slices need to exist.
Can anyone point me at the definitive and simplest base case example that cleanly shows slices being better in some useful way over straight array / list use.
thanks

Slices are of variable size, arrays are fixed size.

Slices have O(1) random access; linked lists have O(n).

An array with n elements uses n times the element size in memory. A slice uses the same, plus a pointer to the start, a length, and a capacity. A linked list needs an extra pointer per element!

That's why each of them exists, and each is useful in some situations and not so useful in others.


>Slices are of variable size, arrays are fixed size.
I already know. Still clueless why slices are better than array references / pointers.

>Slices have a O(1) random access, Lists have O(n).
I don’t believe it. Why doesn’t every language dev in the world for every language ever invented, particularly DB languages, patch their language to work with slices, so that random access can be O(1) ?

>An array with n elements uses n times the element size in memory. A slice the same + a pointer for the start and the current size. A List: one pointer per element!
I’m aware Go copies arrays when passing them to functions instead of passing references. But then why not just pass a pointer to the array or to an array element?

I repeat, since golang’s creation in 2009, no-one has been able to demonstrate the benefit of slices over arrays / lists / pointers. If they had, the demo code would have been fanfared for years all over the internet: “look at how much memory / time / linesOfCode we’ve saved by using slices instead of arrays/lists/pointers!”

And the silence on this, a popular golang forum, confirms this.

The only sensible conclusion is Ken Thompson was just throwing an alternative way of handling arrays (slices) over his shoulder, not knowing if it would be useful, but wondering if some devs somewhere could find a way to make slices better than arrays/lists/pointers.

I consider dealing with slices in Go or Vec in Rust much more ergonomic than dealing with arrays directly in C, where I always have to do a lot of manual bookkeeping when I want them to be dynamic, and therefore basically re-implement what is already known as a slice in Go or a Vec in Rust.

I know that Rust’s Vec and Go’s slices are implemented in a manner that is efficient enough for the most common use cases.

I wouldn’t trust my own implementation, and would probably use a library instead, which would make it easier to deal with, but then I’d have to trust a third party.

I prefer being able to just trust the stdlib/runtime for these kinds of things.


I would not say that it is a question of which one is better. Due to their different properties (arrays = fixed length & pass-by-value, slices = variable length, mapped onto arrays, almost-pass-by-reference), they simply have different use cases.

Slices can be seen as a convenience layer on top of arrays, while arrays can be useful for keeping control over memory layout and heap allocations.

Here is a quite useful difference: In a 2-dimensional slice, each inner slice can have a different length.


Although I wish Go provided ways to write much faster (unsafe, scoped) code, I’ve never seen slices as a performance issue. I thought of them as a memory-safety concept similar to what Python has. In safe code, slices only shrink (the start moves to the right, the end to the left), this is checked at runtime, and I’m not allowed to do otherwise, so it looks like a safe subset of pointer arithmetic to me. Not the definitive one, but one that is okay.

If C/C++ executables were able to stop as early and as forcefully on such initial mistakes (because the language makes you declare unsafe blocks instead of giving you lots of rules you need to be aware of), I would have one less reason to use Go. So there’s no definitive reason in Go, but there’s no real alternative either, is there? (I don’t like it when Go babysits too much, and sometimes it does. For example, refusing to compile unused code looks like a helpful concept, but it’s not as helpful as the makers of Go might think.)

Hi, @meemoeuk, and welcome to the forum!

No, I can’t give you anything definitive. It seems to me that slices only need to exist in Go because they decided to make arrays have compile-time fixed sizes. From my perspective, Go’s slices are what I would call an array in C. In C, I might write something like this:

err_code_t getsomedata(data_t *buffer, int capacity, int *outElements);

So getsomedata gets a buffer and capacity and needs to write its data into the buffer and store the number of elements written into outElements. The API probably has to return some err_code_t value to indicate that there wasn’t enough capacity or something.

In Go, I’d write it like this:

func getsomedata(buffer []data) ([]data, error)

So just like C, I could preallocate a buffer and pass it in, then the implementation appends to it and returns it.

Like I said, I don’t have anything definitive, but the Go version is better than the C version in at least one way to me because it gets you a bit more compile-time safety. When you get a data_t* in C, the type system doesn’t tell you if that’s a pointer to a single data_t or an array. The programmer has to be familiar with the API and conventions. In Go, you use *data for a single and []data for multiple.

Go’s append function is the same thing as C’s realloc to me. In C, I would keep track of the current length and total capacity of the buffer in their own variables or maybe in a struct (it depends on context, of course), but in Go, it’s all wrapped up in a slice (which is just an array pointer plus integer length and capacity fields).

It seems to me they wanted close to C-like simplicity with the built-in types with just a few features added to make things like concatenating strings, appending to sequences of data, associative arrays, etc. easier. If you want something like Python or C#'s list/List<> types, you can build that on top of a slice (or arrays, but then you have to use the unsafe package or do a lot of runtime type checking).

Hi all,

The OP is right that slices are a little more expensive than arrays; his intuition is correct. But you do not need to manually allocate memory to resize a slice, and this is a huge advantage, as you need to write less code.

Slices are slightly more costly (but almost the same), and they are much easier to code with (you write less code with slices than with raw arrays).

I wrote a quick benchmark to illustrate the advantage of slices:

It creates a linked list of a million elements and then sums their values.
It does the same with a slice.

% go test -bench .
goos: darwin
goarch: arm64
pkg: slicebench
BenchmarkSlice-10    	    1400	    853595 ns/op	 8003588 B/op	       1 allocs/op
BenchmarkList-10     	      49	  21411564 ns/op	16000029 B/op	 1000000 allocs/op
PASS
ok  	slicebench	3.579s

So the slice version is 25x faster.

package slicebench

import "testing"

type list struct {
	next *list
	val  int
}

func genList(n int) (li *list) {
	for i := 0; i < n; i++ {
		li = &list{
			next: li,
			val:  i,
		}
	}
	return li
}

func sumList(li *list) int {
	res := 0
	for ; li != nil; li = li.next {
		res += li.val
	}
	return res
}

func genSlice(n int) []int {
	o := make([]int, n)
	for i := range o {
		o[i] = i
	}
	return o
}

func sumSlice(s []int) int {
	res := 0
	for _, v := range s {
		res += v
	}
	return res
}

func BenchmarkSlice(b *testing.B) {
	b.ReportAllocs()
	for n := 0; n < b.N; n++ {
		slice := genSlice(1000000)
		sum := sumSlice(slice)
		if sum != 499999500000 {
			b.Fail()
		}
	}
}

func BenchmarkList(b *testing.B) {
	b.ReportAllocs()
	for n := 0; n < b.N; n++ {
		list := genList(1000000)
		sum := sumList(list)
		if sum != 499999500000 {
			b.Fail()
		}
	}
}

It is worth looking at a disassembly to see how much simpler the slice code is.
Creating the slice involves only a single allocation.
Accessing the slice is very cache-friendly: the data has high locality of reference
and is accessed in a predictable pattern, which allows the caches to prefetch the data.

But in the list version, creating the list requires an allocation for each element,
and traversing it requires a pointer dereference and possibly a cache miss at each element.
In this benchmark, the list is generated on a virgin heap, in the same order it is accessed.
In a real program, the addresses of allocated elements will be effectively randomised, so the list will actually perform much worse, giving the slice an even bigger advantage.