"Merge" objects?

Hi,

For my applications I want to use transactions while keeping my functions simple. So for now I follow the approach of passing a transaction on every call:

func doSth(tx *dbr.Tx, arg1 string, ...) {
    // .. Do sth with the tx
}

While this is OK-ish, it is a little annoying because every function takes one extra argument. Then I saw this approach:

type ownTx struct {
    *dbr.Tx
}

func (t *ownTx) doSth(arg1 string, ...) {
    // .. Do sth with the tx
}

This looks very tempting but:

With dbr (it is pretty much the same with database/sql) you receive a transaction with

tx, err := conn.NewSession(nil).Begin()

As you can see, I would need to rewrite a lot of code to end up with an instance of ownTx instead. So I tried the following:

type ownTx struct {
    *dbr.Tx
}

tx, _ := conn.NewSession(nil).Begin()    
ownTxInstance := ownTx{tx}

But this does not work. Of course I could write a pseudo-constructor function that takes a *dbr.Tx and returns a valid instance of ownTx with the *dbr.Tx embedded. But that is not really satisfying, since I then have to rely on every instance of ownTx actually containing the *dbr.Tx.
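For reference, the pseudo-constructor I have in mind would look roughly like this (newOwnTx is just an illustrative name):

func newOwnTx(tx *dbr.Tx) *ownTx {
    // wrap the existing *dbr.Tx so methods can be defined on ownTx
    return &ownTx{Tx: tx}
}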

So does anyone know a solution that could help me with this? Later I may also want an ownAnotherTx that contains even more dependencies.

Do you intend each function to use its own transaction, or multiple functions to use the same transaction?

If the latter, then I would think of ownTx not as holding the transaction, but as holding whatever state is needed for a process. The process struct can also reduce error handling code at the point of process definition. For example:

package main

import (
	"log"

	"github.com/gocraft/dbr"
	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver (assuming SQLite here)
)

func main() {
	log.SetFlags(log.Lshortfile)

	conn, err := dbr.Open("sqlite3", "my.db", nil)
	if err != nil {
		log.Fatalln(err)
	}
	sess := conn.NewSession(nil)

	p := StartProcess(sess)

	p.First()
	p.Second()
	
	errs := p.Finish()
	for _, err := range errs {
		log.Println(err)
	}
}

type process struct {
	tx  *dbr.Tx
	err error
	// include other process state
}

func StartProcess(sess *dbr.Session) *process {
	p := &process{}
	p.tx, p.err = sess.Begin()
	return p
}

func (p *process) First() {
	if p.err != nil {
		return
	}

	// do something, set p.err on error
}

func (p *process) Second() {
	if p.err != nil {
		return
	}

	// do something, set p.err on error
}

func (p *process) Finish() []error {
	if p.err != nil {
		err := p.tx.Rollback()
		if err != nil {
			return []error{p.err, err}
		}
		return []error{p.err}
	}

	err := p.tx.Commit()
	if err != nil {
		return []error{err}
	}

	return nil
}

I see what you mean: basically the command pattern. I thought about this, but I wanted it to be more loosely coupled. Maybe this is the cleaner way, though…

Not really the command pattern, as there was no intent to store enough information to run the process at a later time (though this could be done with a function literal).
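To illustrate, a rough sketch of capturing the process in a function literal so it can be run later (deferredProcess is only an illustrative name, reusing the process type from above):

// deferredProcess captures the whole process in a closure; calling the
// returned function runs it at that later time.
func deferredProcess(sess *dbr.Session) func() []error {
	return func() []error {
		p := StartProcess(sess)
		p.First()
		p.Second()
		return p.Finish()
	}
}

// later:
// run := deferredProcess(sess)
// errs := run()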

The most loosely coupled approach is to have each function take the transaction as an argument:

func doSth(tx *dbr.Tx, arg1 string, ...) {
    // .. Do sth with the tx
}

This is also the most flexible approach, but tends to result in messy code.
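By "messy" I mean the repeated transaction and error handling at every call site. A rough sketch, assuming the functions return errors (run and doSthElse are made up for the example):

func run(sess *dbr.Session) error {
	tx, err := sess.Begin()
	if err != nil {
		return err
	}

	// every call repeats the same rollback-on-error dance
	if err := doSth(tx, "a"); err != nil {
		tx.Rollback()
		return err
	}
	if err := doSthElse(tx, "b"); err != nil {
		tx.Rollback()
		return err
	}

	return tx.Commit()
}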

What I suggested was that the functions are already related because they work on a common transaction. Factoring out that common transaction (and the error handling) moves that complexity from where the process is implemented (i.e., where the functions are called and their results handled) to where the steps of the process are defined. This is only helpful when the process is complicated or there are multiple processes using the same functions.

If the functions don’t share the same transaction, then it is probably better for each function to create (and finish) the transaction themselves.
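For completeness, that variant would look something like this (doSthAlone is only an illustrative name):

func doSthAlone(sess *dbr.Session, arg1 string) error {
	// the function owns the transaction: it begins and finishes it itself
	tx, err := sess.Begin()
	if err != nil {
		return err
	}

	// do something with tx; on error: tx.Rollback() and return the error

	return tx.Commit()
}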

ok, I see. Thank you for your help.

