Built-in operators and types

I’m curious whether it would make sense to build compile-time methods for binary and unary operators. I’m not a deep language person, but I have types that have equivalents to “add”, “multiply”, etc.

I’m thinking software MIGHT read cleaner if it were possible to bind types to operators. I have no idea how hard this might be.

For instance, I have a three-dimensional coordinate type, say:
type c struct{ x, y, z float64 }

It would be great to be able to say:
var c1, c2 c
c3 := c1 + c2

var c1 c
var f float64
c1 *= f

As another example.

Or, Nirvana would be to be able to add operators, such as “dot product” or “cross product”.

The advantage would be to use all the operator precedence, etc.
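In current Go, all of these end up as named methods. A minimal sketch of what that looks like (the `Vec3` type and method names here are my own, not from any particular library):

```go
package main

import "fmt"

// Vec3 is a hypothetical three-dimensional vector type.
type Vec3 struct{ X, Y, Z float64 }

// Add returns the component-wise sum, standing in for c1 + c2.
func (v Vec3) Add(w Vec3) Vec3 { return Vec3{v.X + w.X, v.Y + w.Y, v.Z + w.Z} }

// Scale returns v times a scalar, standing in for c1 *= f.
func (v Vec3) Scale(f float64) Vec3 { return Vec3{v.X * f, v.Y * f, v.Z * f} }

// Dot returns the dot product of v and w.
func (v Vec3) Dot(w Vec3) float64 { return v.X*w.X + v.Y*w.Y + v.Z*w.Z }

// Cross returns the cross product of v and w.
func (v Vec3) Cross(w Vec3) Vec3 {
	return Vec3{
		v.Y*w.Z - v.Z*w.Y,
		v.Z*w.X - v.X*w.Z,
		v.X*w.Y - v.Y*w.X,
	}
}

func main() {
	c1, c2 := Vec3{1, 2, 3}, Vec3{4, 5, 6}
	c3 := c1.Add(c2) // instead of c3 := c1 + c2
	c1 = c1.Scale(2) // instead of c1 *= 2
	fmt.Println(c3, c1, c1.Dot(c2), c1.Cross(c2))
}
```

Method chaining recovers some readability, but there is no way to hook into the built-in operator precedence this way.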

Note, I haven’t been able to find any discussion on this, but that doesn’t mean it does not exist; this is mostly wondering why.

I’m considering alternatives, and while they are perhaps not as clunky as using library functions, they’re still clunky.

Try go overload operator - Google Search

You’re right, operations on vectors, for example, are really difficult to read in Go.
On the other hand, overloading operators in other languages is often abused and leads to obtuse code (cf. Haskell code).
It’s a tradeoff.


Hi @Dante_Castagnoli,

I see your point, but the Go team has intentionally opted against operator overloading:

Why does Go not support overloading of methods and operators?

Method dispatch is simplified if it doesn’t need to do type matching as well. Experience with other languages told us that having a variety of methods with the same name but different signatures was occasionally useful but that it could also be confusing and fragile in practice. Matching only by name and requiring consistency in the types was a major simplifying decision in Go’s type system.

Regarding operator overloading, it seems more a convenience than an absolute requirement. Again, things are simpler without it.

Also, this excerpt from a great article on a great (but archived) blog shows how operator overloading makes things more difficult rather than easier (it’s about C++ vs. C, but the idea holds for any language):

When you see the code

i = j * 5;

… in C you know, at least, that j is being multiplied by five and the results stored in i.

But if you see that same snippet of code in C++, you don’t know anything. Nothing. The only way to know what’s really happening in C++ is to find out what types i and j are, something which might be declared somewhere altogether else. That’s because j might be of a type that has operator* overloaded and it does something terribly witty when you try to multiply it. And i might be of a type that has operator= overloaded, and the types might not be compatible so an automatic type coercion function might end up being called. And the only way to find out is not only to check the type of the variables, but to find the code that implements that type, and God help you if there’s inheritance somewhere, because now you have to traipse all the way up the class hierarchy all by yourself trying to find where that code really is, and if there’s polymorphism somewhere, you’re really in trouble because it’s not enough to know what type i and j are declared, you have to know what type they are right now, which might involve inspecting an arbitrary amount of code and you can never really be sure if you’ve looked everywhere thanks to the halting problem (phew!).

When you see i=j*5 in C++ you are really on your own, bubby, and that, in my mind, reduces the ability to detect possible problems just by looking at code.


OK, how about a set of operators that indicate they are not operating on standard types but use the same precedence, etc.? And an additional set for which precedence, etc., may be set. Note too that, while I appreciate the goal of removing auto-conversion, that too bloats the language and forces one to use a consistent set of types even when they are perhaps inappropriate. Note, I’ve never had an issue with type conversion, though that’s my individual experience, and may be due more to the kinds of programs I write.

I also understand that type conversion was evaluated prior to Go’s strict rules and was considered a source of errors. Does explicit conversion ameliorate these errors? I must wonder, and I must also wonder whether the cure is worse than the disease.

I’m simply looking at my code and not liking how clumsily it works out.

As an example:

sRads := lib.DegreesToRadians(spin)
// p + r(cos t)v1 + r(sin t)v2; t real
scos, ssin := math.Cos(sRads), math.Sin(sRads)
v1Mod := lib.VMult(&vec3Perp, mag*scos)
v2Mod := lib.VMult(&vec3Cross, mag*ssin)
expect := lib.VAddV(&v1Mod, &v2Mod)
expect = lib.VAddV(&expect, &vecs[0])

if !lib.VEqual(&vecs[1], &expect) {
	t.Fatalf("expected \"X\" rotation %s, received %s\n", vecs[1].String(), expect.String())
}

Note, I’m happy to see thought behind this. Perhaps that’s the correct solution; only I do not like the constraints, as they lead to code I would rather have cleaner and more readable.

If there are better ways, I’m happy to hear them!

Thanks for the pointer! Much the same, I think, as what I’m doing.

I suppose I’m disappointed in the language bloat, as it lacks typecasting and auto-conversion. Though for “C” programmers, one of my favorite interview questions is: if in file “A” you declare a global char x[], and in another file extern char *x, why is that broken? It’s odd to me that so few people can answer! I think I’ve had one in decades.

In general I write software with concurrency across up to tens of thousands of user inputs on a single instance, and so my view about how to write that software intentionally limits input exceptions, as an error in exception processing can be fatal. Also, while I love Go’s unit test system, it is nowhere near sufficient to fully test programs in this class unless they are algorithmically simple.

I suppose that is why I have never had a casting error that got into the field, let alone QA, for whatever that is worth.

My view is that the strict casting of “Go” bloats the language and makes it more complex. On the other hand, perhaps it sorts out bugs in programs that take broader ranges of input. In my experience, most of those kinds of programs are targeted at a single end user.

So, I suppose, in general, I don’t agree with the critique. You always have to know the (base) types.

I do think base and other types present the same problem, and I suspect there are some modifications that try to work around the strictness, but those too are painful.

I suppose I’m not so taken by the claim that many errors occur due to loose type checking without a demonstration that the cure is less bad than the disease.

Go clearly isn’t a language made for utmost writing convenience. The language designers decided to keep the type system simple, with the least amount of surprises.

Yes, this makes the code more verbose, but also essentially non-magic. The code clearly tells what it does, even to the superficial reader. No guesswork is required about what each of the five * operators in the statement a := b * c * d * e * f exactly does (and do they all do the same?), or why a function that expects type A happily accepts type B. (What particular auto-conversion rule matches this scenario?) No doubts exist about whether the == operator is the “basic” equality operator, or whether it does a deep-equal comparison. (How deep? Does it follow pointers? Does it detect loops in graph structures, or would it go into an infinite loop?)
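For instance, Go rejects mixed-type arithmetic outright, so every conversion is visible at the use site; a trivial sketch:

```go
package main

import "fmt"

func main() {
	var n int = 3
	var x float64 = 1.5

	// g := n * x       // compile error: invalid operation, mismatched types int and float64
	g := float64(n) * x // the conversion must be spelled out
	fmt.Println(g)      // prints 4.5
}
```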

I’d rather have a verbose language than a language that has too much magic under the hood.


I wanted to thank you for your time and consideration in expressing these trade-offs with a bit more depth than I’ve read elsewhere.

The argument that a * b * c... requires someone to look up the base types isn’t compelling to me. One still does not know the base type, for instance, of a, b, and c in float64(a) * float64(b) * float64(c).

No auto-conversion requires creating explicit conversion rules instead of using a standard set. Frankly, it seems as if a uniform set of rules was dismissed as too hard to design with the right properties, and so the programmer is forced to create their own.

In this view, it seems to me the value of no standard conversion is to force the programmer to realize conversions are taking place. I would further argue that, should errors arise due to oversight or sloppy programming, these errors should be caught by unit tests.

To me, pointing out to the programmer that autocasting is occurring has utility only if unit tests do not catch the resulting errors.

I would think of this question as follows. Let escape be the set of functions for which unit tests do not catch auto-conversion errors but do catch all other significant coding/design errors.

If the cost of escape is smaller than the bloat/readability cost required to accommodate no autocast, then auto-conversion is a winner. Note, the accessibility of unit testing is quite good in “Go,” and unit tests are essential for many applications (such as cloud applications). It also seems to me the industry is trending toward less QA, placing more responsibility for proper software function on developers and designers. This in turn increases the need for thorough unit testing.

I doubt this way of looking at the tradeoffs could easily be evaluated, but that’s the way I would think of it.

Note, I LOVE Go. I’ve been writing in “Go” for about 7 years now, and it has enormous merits over “C” and “Python” for a broad range of applications. Decades ago I worked with about four members of the “Green Team,” and they railed against the bloat that Java became. Hopefully, that will not happen with “Go.”

I’ve had colleagues express that adding generics to Go was an unfortunate step in that direction. I think that generics allow eliminating a lot of useless code redundancy, but I do see where they can add complexity and can be abused like operator overloading. More trade-offs.


I evaluated generics over perhaps a day or so, and it seemed to me they were part of the solution to “no autocast.” I wrote a bit of experimental code to use them, and perhaps I didn’t spend enough time, but they felt clumsy.
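For the record, that kind of experiment looks roughly like the following: a constraint-based generic function removes per-type duplication, though every call site still fixes a single concrete type, so there is still no conversion between, say, int and float64 (the `Number` constraint and names here are my own sketch):

```go
package main

import "fmt"

// Number is a sketch of a numeric constraint; the
// golang.org/x/exp/constraints package offers similar ready-made ones.
type Number interface {
	~int | ~int64 | ~float32 | ~float64
}

// Sum works for any Number type, but all elements must share one type:
// generics remove duplication without adding auto-conversion.
func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))      // prints 6
	fmt.Println(Sum([]float64{1.5, 2.5})) // prints 4
}
```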

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.