Measure my cache line size?

How can I find out the cache line size on my computer?

I know there are some ways to estimate it by running code, but I found no Go examples.

Also, is there a way I can see in Linux what my cache line size is, perhaps in some properties file?

On a Linux system, you can read /proc/cpuinfo:

$ cat /proc/cpuinfo  | grep cache
cache size	: 30720 KB
cache_alignment	: 64
cache size	: 30720 KB
cache_alignment	: 64

Is this what you are looking for?

Hi @lutzhorn

I am looking for the size of one cache line. This seems like the size of the whole cache. Maybe cache_alignment is the cache line size, 64 bytes?

By cache line I understand the size of the smallest block of data loaded into memory when a cache miss occurs.

I don’t know.

But this question is not related to Go.

@JOhn_Stuart, to answer your question, this should work:

That being said, what are you doing where you need this information? I’ll admit that I’ve been working on a personal, private project where I use that value but I’d probably advise against it and suggest alternatives for any real project(s).

@skillian that is a good approach.

Look at this code; it clearly illustrates the effect of the cache line size:

var arr = make([]int, 64*1024*1024)

func one() {
    for i := 0; i < len(arr); i++ {
        arr[i] *= 3
    }
}

func two() {
    for i := 0; i < len(arr); i += 10 {
        arr[i] *= 3
    }
}
Before you read below, what is your prediction: if benchmarked, how much faster will function two run compared to function one? After all, it does 10 times less work, and the work is of the same nature.

I am looking for more examples of code like this. Also, I know for a fact there is some matrix C code that lets you tell precisely what the cache line size is, not by reading a config/property file but by doing some computation in chunks and increasing the chunk size - or something like that.

I am trying to come up with some Go code that lets you tell precisely, by measuring the runtime of similar functions, what the cache line size is.

Have you come up with an estimate of what the benchmarks will produce for functions one and two?

Whether it’s 32, 64, 128, or some other byte size, skipping 10 values at a shot is not an efficient use of your CPU’s cache lines or actual caches. But why do you have to know the exact size?

just curious, no practical use.

Anyway, you make a good point: skipping 10 * 8 bytes is not efficient. I am thinking of a practical problem: in a slice, only 25% of the elements will be used very often, the rest rarely. What is the best place to arrange those, at the beginning of the array?

Any examples on how to arrange my data to make use of cache line size ?

I see now. I would recommend putting a post like this in the #technical-discussion section next time. When I saw it in the #getting-help section, my goal was to get to the actual problem you were having.

Anyway, given your example of frequently used slice elements, they should be contiguous if at all possible, but not necessarily at the beginning of the array. This code:

package main

func main() {
    vs := make([]int, 1024)
    vs[512] = 123
    _ = vs[512]
}

Is just as memory efficient as if the 512 index was 0. Slices have a pointer to their underlying array, so when the index is 512, there are extra instructions to calculate that offset from the slice’s data pointer, but the CPU still only makes one dereference for the store and another for the load (assuming the optimizer isn’t optimizing away that second access to vs[512]).

I started putting together a list of ideas to keep in mind when implementing cache-efficient code in Go, but it started getting really long, so I stopped. Long story short, I recommend taking a look at Go's runtime. There are plenty of space optimizations in there. One I remember in particular is the internal/ type.

Thanks a lot for everything.

On Sat, 2 May 2020 at 19:09, Sean Killian via Go Forum wrote:

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.