Custom collection using structs

I’d appreciate some assistance in trying to understand how I can leverage more powerful and flexible collections.

An example:
I’m making a trivial app which goes off and collects around a thousand results total from a variety of web services. Each result contains up to eight fields, with about three of them used in my analysis, but I might need to work with up to eight fields theoretically.

I could push all the data to Elastic or MongoDB, but because it’s quite simple I thought it would be fun to work with the data in real time. However, none of the obvious data structures (slice or map) seems like an ideal fit for letting me search up to 1k records, do some arithmetic on a couple of fields, and then return the result.

Any tips for how to work with this data in an efficient manner?


For 1000 records, if you don’t need to store them, just keep them in memory and define some functions that work on a slice; you could, for example, use the sort pkg to sort or search the slice. If you need very fast access by a particular key, you can use a map as well as, or instead of, a slice, but at this size it’s unlikely to be a huge concern.
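A minimal sketch of the in-memory approach, assuming a hypothetical Result struct and field names (the real records would have your web-service fields):

```go
package main

import (
	"fmt"
	"sort"
)

// Result is a hypothetical record type; the field names are assumptions.
type Result struct {
	Name  string
	Score int
}

// topByScore sorts a copy of the slice by Score, descending,
// and returns the first n results.
func topByScore(results []Result, n int) []Result {
	sorted := make([]Result, len(results))
	copy(sorted, results)
	sort.Slice(sorted, func(i, j int) bool {
		return sorted[i].Score > sorted[j].Score
	})
	if n > len(sorted) {
		n = len(sorted)
	}
	return sorted[:n]
}

// index builds a map keyed by Name for fast lookup by key.
func index(results []Result) map[string]Result {
	m := make(map[string]Result, len(results))
	for _, r := range results {
		m[r.Name] = r
	}
	return m
}

func main() {
	results := []Result{{"alice", 90}, {"bob", 75}, {"carol", 82}}
	fmt.Println(topByScore(results, 2)) // highest scores first
	fmt.Println(index(results)["bob"].Score)
}
```

With only ~1k records, both the sort and the map build are effectively instant; the map is only worth keeping around if you do many lookups by the same key.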

If you need to persist them, I’d use a relational DB like Postgres that you can query with SQL, as the querying is typically more sophisticated, and if you need joins later it’s easy to add them. If you understand the tradeoffs and know, say, MongoDB better, you could pursue that, but if you’re not familiar with either, I’d learn SQL first. Something like Postgres is going to be more than fast enough up to millions of records.

However, I’d just try the simplest thing first (in memory), and then if you decide you need to store results, choose the easiest path for you (which will probably come down to which tech you have available/are familiar with). With 1 to 10,000 records it doesn’t really matter what you use.



Thank you very much for the detailed reply, it helps immensely.

You describe, exactly, the technique which I thought would work well - but don’t quite understand. The notion of just keeping all the data in memory and using some functions to do what I need. This is where my lack of knowledge/experience of the Go data structures is holding me back. I understand slices and maps, but what I don’t understand is how I might use those collections to hold each record.

Should I try to make a slice of structs, perhaps where each member of the slice is a ‘whole’ record with fields like ‘name’, ‘score’, etc.?

Maybe a map where the key is the competitor name and the value is a struct (or pointer to a struct) which contains everything else needed in the analysis? I’m just trying to get a handle on how a seasoned Go dev would do it in the most ‘Go’ey’ way, if that makes sense. I much prefer to leverage the inherent strengths of any given language or tool.

Apologies for asking what I suppose is more of a design pattern question than strictly a Go code question. I have no issues with writing the Go here; it’s more about choosing a design for the collection structure.

Many thanks.

Yes, sure, just have a struct for your record as you suggest:

type Record struct {
  Name  string
  Score int
}

and keep them in a var records []*Record or []Record. I wouldn’t bother with anything else like a map unless you find it is too slow, which is unlikely (I/O will probably be your bottleneck here). I tend to use pointers so records can be modified in place. You could use a map[key]Record to get faster access, but you probably don’t need to, and it sounds like you want to filter and sort them, in which case use a slice. See Sort and Search in that sort pkg linked.
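Putting that together, a sketch of filtering and sorting a []*Record; the filter function and the minimum-score cutoff are assumptions for illustration:

```go
package main

import (
	"fmt"
	"sort"
)

type Record struct {
	Name  string
	Score int
}

// filter returns the records whose Score is at least min.
// (A hypothetical helper; filter on whatever fields you need.)
func filter(records []*Record, min int) []*Record {
	var out []*Record
	for _, r := range records {
		if r.Score >= min {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	records := []*Record{{"alice", 90}, {"bob", 60}, {"carol", 82}}

	// Sort in place by descending Score.
	sort.Slice(records, func(i, j int) bool {
		return records[i].Score > records[j].Score
	})

	for _, r := range filter(records, 80) {
		fmt.Println(r.Name, r.Score)
	}

	// Because the slice holds pointers, records can be modified in place.
	records[0].Score += 5
}
```

Because filter copies only pointers, it is cheap even if called repeatedly during analysis.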

Be aware that if you do use goroutines to access the data, you must protect access to it with a mutex; but you probably don’t need goroutines (for example, they would not speed up most simple sort/search operations on such a small dataset).
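If you do end up fetching results concurrently (one goroutine per web service, say), a mutex-guarded wrapper is enough; a sketch, with the store type and method names being assumptions:

```go
package main

import (
	"fmt"
	"sync"
)

type Record struct {
	Name  string
	Score int
}

// store guards the shared slice with a mutex so concurrent
// fetch goroutines can append safely. (Hypothetical wrapper type.)
type store struct {
	mu      sync.Mutex
	records []Record
}

func (s *store) add(r Record) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.records = append(s.records, r)
}

func (s *store) count() int {
	s.mu.Lock()
	defer s.mu.Unlock()
	return len(s.records)
}

func main() {
	var s store
	var wg sync.WaitGroup

	// Simulate results arriving concurrently from web services.
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			s.add(Record{Name: fmt.Sprintf("svc-%d", i), Score: i})
		}(i)
	}
	wg.Wait()

	fmt.Println(s.count()) // 10
}
```

Once all fetches have finished (after wg.Wait()), a single goroutine can sort and analyse the slice without further locking.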


Ideal. Thanks very much for the additional information. A slice of structs it is…

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.