Gorgonia (deep learning toolkit written in pure Go) v0.7.1 Released

Gorgonia v0.7.1 has been released. There have been a large number of API changes, mainly to the tensor subpackage. A write-up, including the background of why the changes were made and the problems faced, can be found in this blog post - there are some notes about generics in Go mixed in there as well.

Breaking Changes

The entire tensor subpackage has been refactored. There have been a number of API changes, as well as structural changes. This is also the first release with a semver-compatible version number (previous releases were unversioned).

For the most part, if you have been using the exported functions, there will not be any change. If you have been using the methods of the *Dense tensor, there are some changes to the naming of the methods that interact with scalar values.

The refactored library allows for greater extensibility (you can now use CUDA/OpenCL on the Tensor data structure if you have the correct execution engine for it).

Here is the list of changes to the function/method names:

| Old | New |
|-----|-----|
| Trans | AddScalar |
| TransInv / TransInvR | SubScalar |
| Scale | MulScalar |
| ScaleInv / ScaleInvR | DivScalar |
| PowOf / PowOfR | PowScalar |

Where in the past the -R suffix indicated that the scalar value was on the right, the new -Scalar methods require a boolean argument indicating whether the tensor is the left operand. This standardization has been applied in gorgonia proper as well.
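As a rough illustration of the new left/right semantics, here is a toy sketch on a plain float64 slice - this is not the actual tensor API (the real methods also take functional options), just the operand-ordering rule the boolean encodes:

```go
package main

import "fmt"

// subScalar mimics the semantics of the new -Scalar methods:
// the boolean reports whether the tensor is the LEFT operand.
// This is a hypothetical toy version operating on a plain slice.
func subScalar(data []float64, s float64, tensorLeft bool) []float64 {
	out := make([]float64, len(data))
	for i, v := range data {
		if tensorLeft {
			out[i] = v - s // tensor - scalar (old TransInv)
		} else {
			out[i] = s - v // scalar - tensor (old TransInvR)
		}
	}
	return out
}

func main() {
	t := []float64{1, 2, 3}
	fmt.Println(subScalar(t, 10, true))  // [-9 -8 -7]
	fmt.Println(subScalar(t, 10, false)) // [9 8 7]
}
```

One boolean flag thus replaces each old method pair (TransInv/TransInvR and so on), which is what shrinks the naming surface.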

Performance Impact

  • There has been a ~100% improvement in tensor-tensor operations on contiguous slices - this restores performance to the level it was at before a refactor in January this year. The flip side is that operations on non-contiguous slices now perform up to 70% worse.
  • There has been a ~20% improvement in tensor-scalar operations.
  • There has also been a 90+% decrease in the amount of memory used when performing operations. This comes from unifying the functions into two possible code paths: either using an Iterator or not. Future work will focus on speeding up Iterator.

What This Means: if you work on deep learning-related stuff, you have to know how your data is laid out, because that affects performance (but you probably already knew this). This release improves performance on contiguous layouts, and future work will keep treating contiguous data as a first-class citizen.
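To make the layout point concrete, here is a small self-contained sketch (plain slices, not the tensor package) of why contiguous and non-contiguous access differ: a row of a row-major matrix is a plain subslice, while a column must hop a full stride through memory on every element - the access pattern the slower Iterator path has to handle:

```go
package main

import "fmt"

// column extracts column c from a row-major matrix with the given
// row stride - a non-contiguous access pattern: consecutive elements
// are a full stride apart in memory, which defeats the cache and
// the CPU prefetcher.
func column(data []float64, stride, c, rows int) []float64 {
	out := make([]float64, rows)
	for i := 0; i < rows; i++ {
		out[i] = data[i*stride+c]
	}
	return out
}

func main() {
	// A 3x4 row-major matrix stored contiguously in one slice.
	data := []float64{
		1, 2, 3, 4,
		5, 6, 7, 8,
		9, 10, 11, 12,
	}

	// A row is contiguous: a plain subslice, cache-friendly.
	fmt.Println(data[4:8]) // [5 6 7 8]

	// A column requires strided hops through memory.
	fmt.Println(column(data, 4, 2, 3)) // [3 7 11]
}
```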

API Surface

In the old version, there were 65 exported types, functions and methods. In v0.7.1, there are now 138 exported types, functions and methods.

New Things

  • Compressed Sparse Row/Column matrices are now supported. The data structure is *CS. There are 4 constructor functions, and there may be more to come (owing to the fact that there are many, many ways to create a sparse matrix).
  • Mod is now supported as an operation on tensors.
  • FMA has been added as a function to perform Axpy-type operations.
  • New methods on the Tensor types allow users to better inspect how the data is laid out.
  • Extensibility of the data structures has been improved. An example of how to extend the generic data type can be found here: extension example.
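For readers unfamiliar with the compressed sparse layout, here is a minimal toy sketch of the standard CSR scheme - three parallel slices - that a structure like *CS stores. This is not the tensor package's actual type, just an illustration of the representation:

```go
package main

import "fmt"

// csr is a toy compressed sparse row matrix: only non-zero values
// are stored, row by row, with index slices describing where they go.
type csr struct {
	rows, cols int
	indptr     []int     // indptr[r]..indptr[r+1] spans row r's entries
	indices    []int     // column index of each stored value
	data       []float64 // the non-zero values themselves
}

// at returns the element at (r, c), or zero if it is not stored.
func (m csr) at(r, c int) float64 {
	for i := m.indptr[r]; i < m.indptr[r+1]; i++ {
		if m.indices[i] == c {
			return m.data[i]
		}
	}
	return 0
}

func main() {
	// The dense matrix being represented:
	//   [ 0 5 0 ]
	//   [ 1 0 0 ]
	//   [ 0 0 9 ]
	m := csr{
		rows: 3, cols: 3,
		indptr:  []int{0, 1, 2, 3},
		indices: []int{1, 0, 2},
		data:    []float64{5, 1, 9},
	}
	fmt.Println(m.at(0, 1), m.at(1, 0), m.at(2, 2), m.at(0, 0)) // 5 1 9 0
}
```

The many possible input forms (coordinate triples, dense matrices, per-row maps, ...) are why several constructor functions exist rather than one.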

Future

The next release will change the import location for the tensor package, and will also carry some refactors in Gorgonia to enable better CUDA usage.


nice!

do you plan on migrating to gonum.org/v1/gonum/{mat,graph} instead of using the now-deprecated github.com/gonum/{matrix/mat64,graph} imports?

(do you need some help with that?)

keep 'em coming :)
-s


The answer is yes (gonum does all the heavy lifting) - that’s literally the next thing.

And yes, I will need help. I’m starting midweek this week (taking a few days off coding)… will ping you on Slack.
