Deserialization with reflection, good or bad?


I have this bit of code which deserialises my structs automatically. I did this because I found I was writing a lot of samey code and wanted to simplify it: I pass any struct along with my byte slice, and the slice is automatically deserialised into the provided struct.

This is what my code looks like:

val := reflect.ValueOf(packet).Elem()
// The first NumField() bytes of message hold one length byte per field;
// the field payloads follow back to back.
startr := val.NumField()
var endr int
for i := 0; i < val.NumField(); i++ {
	endr = startr + int(message[i])

	switch val.Field(i).Kind() {
	case reflect.String:
		val.Field(i).SetString(string(message[startr:endr]))
	case reflect.Uint32:
		val.Field(i).SetUint(uint64(binary.LittleEndian.Uint32(message[startr:endr])))
	default:
		log.Fatal("Unknown type in Deserializer, switch defaulted!")
	}
	startr = endr
}

Are there any downsides to doing this for server-side code? I’m fairly new to networking, and although this was quite a neat way to simplify my deserialisation, as in life, there’s bound to be a catch that I am not aware of, so I thought I would ask if there are any potential problems with this approach?


Your approach is sound. In fact, serialization is basically the only sane use case for reflection.

The Go standard library also uses reflection for serialization: for example in the JSON package[1].


I would be pretty careful here. A lot of the reflect functions (including Set()) can panic if the destination is not settable or if the source and destination types are not compatible, which you definitely want to avoid if your code is to be robust. Pre-check your destination fields with CanSet(), verify type compatibility, and wrap the whole thing with recover() to protect against any remaining panics.

The other thing that would make me really wary of this is that it is not robust to changes in the order or number of fields in a struct, unlike the JSON serialization approach which attaches an explicit name to each field.

If you control the serialization protocol and all the peers on the network, I would consider using the standard Go JSON encoding for simplicity and robustness. If you really want a binary format, use an established library.

Alternatively, if the sameyness of your original code bothers you, why not try code generation, i.e. code that generates your boilerplate serialization code (look up go generate).
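To make that concrete, here is the kind of explicit, per-type function a generator would emit in place of the reflective loop. I am assuming a hypothetical layout matching your snippet: one length byte per field up front, then the payloads back to back (Packet and deserializePacket are illustrative names):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Packet is an illustrative struct; deserializePacket is the sort of
// boilerplate a generator would write for it.
type Packet struct {
	Name string
	ID   uint32
}

// deserializePacket assumes a length header of one byte per field,
// followed by the field payloads in declaration order.
func deserializePacket(msg []byte) (Packet, error) {
	var p Packet
	if len(msg) < 2 {
		return p, fmt.Errorf("short message")
	}
	start := 2 // two fields, so two length bytes
	end := start + int(msg[0])
	if end > len(msg) {
		return p, fmt.Errorf("bad length for Name")
	}
	p.Name = string(msg[start:end])
	start = end
	end = start + int(msg[1])
	if msg[1] != 4 || end > len(msg) {
		return p, fmt.Errorf("bad length for ID")
	}
	p.ID = binary.LittleEndian.Uint32(msg[start:end])
	return p, nil
}

func main() {
	msg := []byte{4, 4, 'p', 'i', 'n', 'g', 7, 0, 0, 0}
	p, err := deserializePacket(msg)
	fmt.Println(p, err) // {ping 7} <nil>
}
```

The generated code is dull but fast, panic-free, and every length is bounds-checked, which is exactly what you want on a server parsing untrusted input.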

I don’t see how this is the case. I get the number of fields each time and iterate with the for loop, so their order and total field count shouldn’t affect it? Unless I misunderstood what you meant.

I guess what I mean is this: it is robust if you are in the position where you can update all communicating peers at the same time, but if you can only update some of them to support a new field, then the ones you don’t update will break.

However, if the field names are serialized along with the corresponding field values, then there is no such issue.

The use case is backwards compatibility, i.e. new versions of the structure with optional fields etc. Such backwards compatibility allows individual peers to be updated independently, which is a major bonus.