Hello,
I made a custom type
type myPrecision float32
so I can later decide what precision I want to use for these variables by simply changing this line.
But now I have to do type conversions everywhere!
Is there a simple way, like a preprocessor, that just replaces every “myPrecision” with “float32” or “float64” or whatever I want?
I’d be inclined to do that in a dev environment as a refactoring operation, expecting it to eventually settle down at a stable value.
Thank you.
But I want to run my code on very different machines, compile it for a Windows-Server, but also for a small Raspberry Pi. So I want to be able to scale it up or down.
In Go 1.9 you can write type myPrecision = float32, but I’m sort of hesitant to suggest it, as this is a new feature and the community hasn’t really agreed on what’s good and bad practice yet.
But I’m not sure this solves your issue. If you’re having to do type conversions today and want to avoid doing them, that means you have a float32 on the other side. If you change myPrecision to be an alias of float64 instead you would get a compile error.
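To make this concrete, here is a minimal runnable sketch (assuming Go 1.9 or later); `scale` and the variable names are invented for illustration:

```go
package main

import "fmt"

// With a Go 1.9 alias, myPrecision and float32 are the same type,
// so no conversions are needed on either side.
type myPrecision = float32

func scale(v myPrecision) myPrecision { return v * 2 }

func main() {
	var f float32 = 1.5
	fmt.Println(scale(f)) // compiles: f already is a myPrecision
}
```

If the alias target were changed to float64, the call `scale(f)` would stop compiling, because a float32 no longer matches.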
Thank you, that sounds promising.
In most cases, the myPrec values act only among themselves. There are just a few places where I need explicit type conversions for variables or calculations that have to be float64.
Most of the time it’s the math library that doesn’t understand what a myPrec is.
I can easily exchange it for a math32 version.
I also want to use the myPrec type in more than one package, but the compiler says packageA.myPrec != packageB.myPrec, even though both are nothing else than float32 …
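That cross-package complaint can be reproduced in a single file: two named types with the same underlying type are still distinct types. A small sketch, where `precA` and `precB` are invented names standing in for packageA.myPrec and packageB.myPrec:

```go
package main

// Distinct named types, even though both are "nothing but" float32.
type precA float32
type precB float32

func main() {
	var a precA = 1
	b := precB(a) // explicit conversion required; `var b precB = a` would not compile
	_ = b
	// With aliases (type precA = float32; type precB = float32), both names
	// would denote float32 itself, and values would assign freely,
	// even across package boundaries.
}
```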
The variables and functions that need to be float64 prevent me from running a search-and-replace over all the source files, unless I do that before every build-and-test run (which I do very often because I’m a beginner in Go). I use LiteIDE on Windows to write the source and run tests; I don’t know if it can do this for me.
In other words:
There are a few variables and functions that need to be as precise as possible (float64 or maybe bigger).
There are (many more) other ones that need to be as small and fast as possible when running on a Raspberry Pi, but not on a Windows or Linux server (myPrecision, scalable before compiling). I want to be able to get the best out of the target system.
An alias would do that for me.
In Go 1.8.3, one can write:
import . "math"

type myPrecision float64

func doSomething(a, b, c float64, x, y, z myPrecision) myPrecision {
	return myPrecision(Sqrt(a*float64(x) + b/float64(y) + Pow(c, float64(z))))
}
Test runs tell me that the unnecessary conversions don’t affect speed and size of the executable.
The compiler notices that myPrecision has the same representation as float64 and emits no code to convert them.
But it slows down writing and is not very readable, is it?
I don’t understand why it doesn’t notice the equality when I omit the pseudo-conversions …
First, let me help you with the exported-names problem you’re having. Only variable/type/function/etc. names that are Capitalized are exported from a package. Secondly, is there a reason you don’t want to define a struct type with a float-typed member? Operations on the member field would be of whichever float type it is, and you would still be able to change the field’s float type on a single line.
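A hedged sketch of that struct suggestion (`prec`, `Measurement`, and `Scaled` are invented names): the configurable float type lives in one field, changeable on a single line, and operations on that field stay in that type:

```go
package main

import "fmt"

type prec float32 // change this one line to scale precision up or down

// Measurement wraps the value whose precision should be configurable.
type Measurement struct {
	Value prec
}

// Scaled operates entirely in type prec, so no conversions appear here.
func (m Measurement) Scaled(factor prec) prec {
	return m.Value * factor
}

func main() {
	m := Measurement{Value: 2.5}
	fmt.Println(m.Scaled(4))
}
```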
Thank you. I didn’t try to export the type; I tried to define it in every package (which would work if it were an alias).
Maybe I should show you some of my source:
package mdo
import (
	. "math32" // or "math"
	math32 "math32"
	math64 "math"
)

type (
	mdoTime      uint32  // unsigned Unix time, I don't need dates before 1970
	mdoPrecision float32 // scalable precision to get the best out of the target system
)

const (
	Value_Error     mdoPrecision = 15000000
	Value_Underflow mdoPrecision = 15000006
	Value_Overflow  mdoPrecision = 15000007
)
type DeviceValues struct {
	Clock mdoTime
	Chans []float64
}

type DeviceVars struct {
	NextPoll  mdoTime
	QueryTime mdoTime
	waitFor   string
	endChar   uint8
	IsActiv   bool
}

type ChannelValues struct {
	Value        mdoPrecision
	RawValue     mdoPrecision
	Time         mdoTime
	LastOKValue  mdoPrecision
	LastOKTime   mdoTime
	LastAvgValue mdoPrecision
	LastAvgTime  mdoTime
}

type ChannelVars struct {
	freeSlot mdoTime
	anzSlots mdoTime
}

type Chan struct {
	Name          string
	Minimum       mdoPrecision
	Maximum       mdoPrecision
	ScaleOffset   mdoPrecision
	ScaleFactor   mdoPrecision
	Unit          string
	Format        string
	Formula       string
	Symbol        string
	MeanTime      mdoTime
	MeanType      int8
	RangeBehavior int8
	Virtual       bool
	ChannelValues
	ChannelVars
}

var (
	Chans     []Chan
	ValLists  [][]mdoPrecision
	TimeLists [][]mdoTime
)
// in this function I import and scale a float64-value delivered by an external device:
func CalcValue(v float64, min, max, offset, factor mdoPrecision, t mdoTime) mdoPrecision {
	if math64.Abs(v) < 999999999 { // higher values are error codes
		newv := mdoPrecision(v*float64(factor) + float64(offset))
		if newv < min {
			...
		}
	}
}
There are many more variables I would like to declare with an aliased type to make the definitions self-explanatory:
type (
	mdoPointer    uint16 // index of a device, channel, template etc. ...
	deviceID      int16
	mdoDeviceType int16
	...
)
Seems I have to wait until 1.9 gets stable.
I would just use float64 and accept that slow machines are slow.
Good point. I would rather live with longer execution times than with the horrible rounding and precision errors of a float32.
And I wonder whether a RasPi is really restricted to float32. Even 32-bit processors can have math coprocessors that support float64 in hardware.
Speed is not that much of an issue, but storage capacity is. I have to keep many huge arrays in RAM; that’s the main reason for using float32. But when you have to average over half a million values, speed also becomes relevant.
Especially if it is a vectorial mean, and you have to transform all values from polar coordinates to cartesian.
When we talk about wind speed and direction, it is not necessary to be accurate to ten decimal places.
But all that is not the point. First and foremost, it is about readability and adaptability.
Why can’t we have such a simple thing as an alias, which solves both?
Seems you really have a valid use case for float32 then.
The type aliases that come with 1.9 might be what you are looking for, although, as @calmh indicated, they are still being discussed. Their primary use case is to help with refactoring the code base.
But when looking at the plain definition of type aliases, they would seem a good match for your purpose. I’d say test it out.
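For instance, with an alias whose target is float64, values can be handed to the math package directly, with no conversions on either side (a sketch assuming Go 1.9; `doSomething` is an invented name):

```go
package main

import (
	"fmt"
	"math"
)

// An alias: myPrecision *is* float64, not a new type derived from it.
type myPrecision = float64

func doSomething(a float64, x myPrecision) myPrecision {
	return math.Sqrt(a * x) // x passes as a float64 without conversion
}

func main() {
	fmt.Println(doSomething(2, 8)) // sqrt(2*8) = 4
}
```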
Regarding an earlier question from you about why a type is different from the type it has been derived from:
I don’t understand why it doesn’t notice the equality when I omit the pseudo-conversions …
The reason is safety. Consider
type Celsius int
type Fahrenheit int
Assigning a Fahrenheit value to a Celsius variable triggers an error, and rightly so. You would not want your code to overheat your living room because it regards 70 °F as a Celsius value.
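A runnable illustration of that safety property:

```go
package main

import "fmt"

type Celsius int
type Fahrenheit int

func main() {
	var f Fahrenheit = 70
	// var c Celsius = f // compile error: cannot use f (type Fahrenheit) as type Celsius
	c := Celsius((f - 32) * 5 / 9) // an explicit, deliberate conversion
	fmt.Println(c)                 // 21 (integer arithmetic)
}
```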
Back to your main problem, if the type aliases turn out to not meet your requirements, there is another possible option.
Packages like genny provide generic types through code generation. This, plus using build tags or file suffixes would allow generating platform- and architecture-specific floating point code.
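A non-runnable sketch of the build-tag route, as two hypothetical files (the `arm` tag is only an example; you would pick whatever constraint matches your targets):

```go
// prec_small.go - compiled only for ARM targets such as a Raspberry Pi

// +build arm

package mdo

type mdoPrecision float32

// prec_big.go - compiled for everything else, e.g. the Windows server

// +build !arm

package mdo

type mdoPrecision float64
```

Each build then sees exactly one definition of mdoPrecision, so nothing in the rest of the package changes between targets.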
Thank you very much, I will have a close look at genny.
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.