I implemented a Java-like, lazy, generic stream programming framework in Go

Besides Go, I also program in Java, Kotlin, and Rust. I am fascinated by Java's stream framework and Rust's iterators because they give programmers a higher level of abstraction for processing their data.
Unfortunately, Go does not support this (I think partially because Go does not support generics either).
So I implemented it myself, doing the type checks at runtime. Although this loses some runtime performance and static type checking, it does work, with an API very similar to Java's.
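To show what "type check at runtime" means here, below is a minimal standalone sketch of a reflection-based map over a slice. It is only an illustration of the technique, not gostream's actual code; the function name `mapSlice` is hypothetical:

```go
package main

import (
	"fmt"
	"reflect"
)

// mapSlice applies fn to every element of slice using reflection.
// Element and function types are verified at runtime, not compile time.
func mapSlice(slice interface{}, fn interface{}) (interface{}, error) {
	sv := reflect.ValueOf(slice)
	fv := reflect.ValueOf(fn)
	if sv.Kind() != reflect.Slice {
		return nil, fmt.Errorf("expected a slice, got %s", sv.Kind())
	}
	ft := fv.Type()
	if ft.Kind() != reflect.Func || ft.NumIn() != 1 || ft.NumOut() != 1 {
		return nil, fmt.Errorf("fn must be func(T) R")
	}
	if sv.Type().Elem() != ft.In(0) {
		return nil, fmt.Errorf("element type %s does not match fn input %s",
			sv.Type().Elem(), ft.In(0))
	}
	// Build the output slice with the function's result type.
	out := reflect.MakeSlice(reflect.SliceOf(ft.Out(0)), sv.Len(), sv.Len())
	for i := 0; i < sv.Len(); i++ {
		out.Index(i).Set(fv.Call([]reflect.Value{sv.Index(i)})[0])
	}
	return out.Interface(), nil
}

func main() {
	doubled, err := mapSlice([]int{1, 2, 3}, func(x int) int { return x * 2 })
	fmt.Println(doubled, err) // [2 4 6] <nil>
}
```

The trade-off is visible: a mismatched function type is only reported as an error when the pipeline runs, where a generic version would fail at compile time.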

import (
	"fmt"
	r "reflect"
	"strconv"
	"testing"
)

func Test_Typical_SliceStream(t *testing.T) {
	target := []int{1, 4, 9, 16, 36}
	slice, err := SliceStream([]string{"1", "2", "3", "4", "55", "6"}).
		Filter(func(it string) bool { return len(it) < 2 }).
		Map(func(it string) int {
			i, err := strconv.Atoi(it)
			if err != nil {
				i = 0
			}
			return i * i
		}).
		Collect()
	if err != nil {
		panic(err)
	}
	if !r.DeepEqual(target, slice) {
	panic(fmt.Sprintf("target slice is: %v, while got: %v\n", target, slice))
	}
}
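The laziness of such a pipeline can be modeled by chaining stages as closures that only run when a terminal operation like Collect is called. Below is a simplified, untyped illustration of that idea; the type `pipeline` and its methods are hypothetical and not gostream's internals:

```go
package main

import "fmt"

// stage transforms one element and reports whether to keep it.
type stage func(v interface{}) (out interface{}, keep bool)

// pipeline records stages lazily; nothing executes until collect runs.
type pipeline struct {
	src    []interface{}
	stages []stage
}

func from(src ...interface{}) *pipeline { return &pipeline{src: src} }

func (p *pipeline) filter(pred func(interface{}) bool) *pipeline {
	p.stages = append(p.stages, func(v interface{}) (interface{}, bool) {
		return v, pred(v)
	})
	return p
}

func (p *pipeline) mapped(f func(interface{}) interface{}) *pipeline {
	p.stages = append(p.stages, func(v interface{}) (interface{}, bool) {
		return f(v), true
	})
	return p
}

// collect is the terminal operation: each element flows through all
// stages before the next element is touched, so no intermediate
// slices are materialized between stages.
func (p *pipeline) collect() []interface{} {
	var out []interface{}
	for _, v := range p.src {
		keep := true
		for _, s := range p.stages {
			if v, keep = s(v); !keep {
				break
			}
		}
		if keep {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	res := from(1, 2, 3, 4).
		filter(func(v interface{}) bool { return v.(int)%2 == 0 }).
		mapped(func(v interface{}) interface{} { return v.(int) * 10 }).
		collect()
	fmt.Println(res) // [20 40]
}
```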

Furthermore, its extensibility allows you to define your own object resolver using reflection.
It also supports converting an ObjectStream to a MapEntryStream:

import (
	"fmt"
	r "reflect"
	"strconv"
	"testing"
)

type doubleIntResult struct {
	result r.Value
}

func (dr *doubleIntResult) Result() r.Value {
	return dr.result
}

func (dr *doubleIntResult) Ok() bool {
	return true
}

// multiplies an int (the result is int64 because of reflection)
type multiInt struct {
	fac int
}

func (dr *multiInt) Invoke(v r.Value) IResolveResult {
	return &doubleIntResult{result: r.ValueOf(v.Int() * int64(dr.fac))}
}

// zero value used to obtain the reflect.Type of int64
var zero int64

func (dr *multiInt) OutType() r.Type {
	return r.TypeOf(zero)
}

func Test_Custom_Resolver(t *testing.T) {
	s := []int{-1, 0, 1, 2, 3, 4, 5}
	target := []int64{3, 6, 9, 12, 15, 18}

	var slice []int64
	err := SliceStream(s).
		Map(func(it int) int { return it + 1 }).
		Filter(func(it int) bool { return it > 0 }).
		Resolve(&multiInt{3}). // object becomes int64 after that
		CollectAt(&slice)
	if err != nil {
		panic(err)
	}
	if !r.DeepEqual(slice, target) {
	panic(fmt.Sprintf("target slice is: %v, while got: %v\n", target, slice))
	}
}

func Test_StreamToMap(t *testing.T) {
	s := []int{0, 1, 2, 3, 4, 5, 6}
	targetMap := map[int]string{0: "0", 1: "1", 2: "2", 3: "3", 4: "4", 5: "5"}
	res, err := SliceStream(s).
		AsMapKey(func(it int) string { return strconv.Itoa(it) }).
		FilterValue(func(it string) bool { return it != "6" }).
		Collect()
	if err != nil {
		panic(err)
	}
	if !r.DeepEqual(targetMap, res) {
	panic(fmt.Sprintf("target map is: %v, while got: %v\n", targetMap, res))
	}
}

By the way, I realize that with generics arriving in Go 1.18, it is possible that Go will eventually supply a standard generic collection framework and stream framework.
However, I still did this work for two reasons: first, many companies use old versions of Go for various reasons; second, a collection framework plus a stream programming framework is a huge project, and I can't wait for them.
And the Go team may just leave generic collections and stream programming out entirely…
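For comparison, here is a sketch of what eager map/filter helpers could look like with Go 1.18 type parameters. This is not part of gostream; the function names are illustrative, and the point is that the type checks the reflection version does at runtime happen at compile time here:

```go
package main

import "fmt"

// Map applies f to every element; input and output element types are
// checked by the compiler, so no reflection is needed.
func Map[T, R any](in []T, f func(T) R) []R {
	out := make([]R, 0, len(in))
	for _, v := range in {
		out = append(out, f(v))
	}
	return out
}

// Filter keeps the elements for which pred returns true.
func Filter[T any](in []T, pred func(T) bool) []T {
	out := make([]T, 0, len(in))
	for _, v := range in {
		if pred(v) {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	squares := Map(
		Filter([]int{1, 2, 3, 4}, func(v int) bool { return v > 2 }),
		func(v int) int { return v * v },
	)
	fmt.Println(squares) // [9 16]
}
```

These helpers are eager (each call builds a new slice), so a lazy, chainable stream over type parameters would still be real design work on top of this.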

The project's GitHub address is xuc1995/gostream: A go module supply Java-Like generic stream programming (while do type check at runtime) (github.com). Any suggestions and contributions are welcome.

Hi @xuc1995,

Thank you for sharing your work here.

Regarding the use of old Go versions, I believe this decision often results from not knowing the Go 1 compatibility promise. This promise basically states that the Go 1.x toolchain and standard library are kept backwards compatible to support existing code to the greatest extent possible. (That is, unless there is a super important breaking change to make, such as fixing a severe security hole.)

I believe the mindset of “don’t upgrade the toolchain unless you can’t avoid it” is mostly driven by experience with other languages, where toolchain upgrades can indeed break the code base easily.

And with regard to support of generic stream processing in the Go standard library, don’t hold your breath. It likely will not come soon. The process of getting something new into the standard library is a quite lengthy one, and rightly so. Code in the standard library should be tested and battle-proven, as it needs to be stable for years and years to come.

It is more likely that third-party stream processing packages (yours perhaps?) will pop up much faster.

