A question about my code dealing with io.Reader in Go

Hi

I am trying to stream data from one cloud storage provider, say S1, to my own storage S2 in Go:

// Backup function
func Backup(<some arguments>, uploadPathName string, objectReader io.Reader) {

	ctx := context.Background()

	// Create an upload handle.
	// some code
	fmt.Printf("\nUploading ....")

	_, err := io.Copy(upload, objectReader)
	if err != nil {
		// Abort the upload so a partial object is not committed.
		abortErr := upload.Abort()
		log.Fatal("Could not upload data: ", err, abortErr)
	}

	// Commit the upload.
	// some code
}

Suppose I have a 15 MB file stored in my S1 instance.
When I run my code, the backup stops midway without reporting any error, and only a partial file is backed up to my storage.

No other cloud storage provider shows this problem with the same code.
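
One thing worth noting about the first version: io.Copy returns a nil error once the source reader reports io.EOF, so a reader that ends early still looks like a complete, successful copy. A minimal sketch that makes the truncation visible by logging the byte count (copyAndCount is just a hypothetical helper name, not part of my real code):

import (
	"io"
	"log"
)

// copyAndCount behaves like io.Copy but logs how many bytes were
// actually transferred, so a silent short read becomes visible.
func copyAndCount(dst io.Writer, src io.Reader) (int64, error) {
	n, err := io.Copy(dst, src)
	log.Printf("io.Copy transferred %d bytes (err = %v)", n, err)
	return n, err
}

Comparing the logged count against the object’s known size (15 MB in my test) confirms whether the stream itself ends early.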

For S1 I had to use the following code instead, which involves a more complicated SectionReader approach; it works with a *minio.Object but not with a plain io.Reader:

// Backup function
func Backup(<some arguments>, uploadPathName string, objectReader *minio.Object) {

	ctx := context.Background()

	// Create an upload handle.
	// some code
	fmt.Printf("\nUploading ....")

	var lastIndex int64
	buf := make([]byte, 32768)

	var err1 error
	for err1 != io.EOF {
		// Read the next chunk of the object starting at lastIndex.
		sectionReader := io.NewSectionReader(objectReader, lastIndex, int64(len(buf)))
		var numOfBytesRead int
		numOfBytesRead, err1 = sectionReader.ReadAt(buf, 0)
		if err1 != nil && err1 != io.EOF {
			log.Fatal("Could not read data: ", err1)
		}
		if numOfBytesRead > 0 {
			// Try to upload the chunk up to MAXRETRY times.
			var err error
			retry := 0
			for retry < MAXRETRY {
				// Rebuild the reader on every attempt; a failed io.Copy
				// may already have drained part of the previous one.
				reader := bytes.NewReader(buf[:numOfBytesRead])
				_, err = io.Copy(upload, reader)
				if err != nil {
					retry++
				} else {
					break
				}
			}
			if retry == MAXRETRY {
				log.Fatal("Could not upload data: ", err)
			}
		}

		lastIndex = lastIndex + int64(numOfBytesRead)
	}

	// Commit the upload.
	// some code

	// Close the object handle after reading from it.
	if err := objectReader.Close(); err != nil {
		log.Fatal(err)
	}
}
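
(As an aside: since *minio.Object implements io.ReaderAt, I believe the io.NewSectionReader created on every iteration could be replaced by a direct objectReader.ReadAt(buf, lastIndex) call; the SectionReader is essentially just re-windowing the same ReaderAt.)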

But I want the function to accept a plain io.Reader and still back up the file completely.
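
Ideally the loop would need nothing more than io.Reader, something like this sketch (it assumes the same upload handle as above and leaves out the per-chunk retries):

	buf := make([]byte, 32768)
	for {
		numOfBytesRead, readErr := objectReader.Read(buf)
		if numOfBytesRead > 0 {
			// Upload just this chunk; the retry loop above could wrap this.
			if _, err := io.Copy(upload, bytes.NewReader(buf[:numOfBytesRead])); err != nil {
				log.Fatal("Could not upload data: ", err)
			}
		}
		if readErr == io.EOF {
			break
		}
		if readErr != nil {
			log.Fatal("Could not read data: ", readErr)
		}
	}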

So I would like to know: is the issue in the cloud storage S1, or in my code?

Impossible to tell without having access to your S1.

S1 is Zenko cloud storage.

My Zenko instance is the free Zenko sandbox at https://admin.zenko.io (it resets every 48 hours).

This isn’t “elegant” by any means, but it has helped me catch infinite loops; maybe it will help you, too! The runtime/pprof package has a Profile type with a WriteTo method that you can use to print stack traces of all goroutines. You can use os/signal.Notify to register a handler that catches os.Interrupt (i.e. Ctrl+C) and prints your goroutine traces before exiting:

import (
    "os"
    "os/signal"
    "runtime/pprof"
)

func main() {
    // ...
    c := make(chan os.Signal, 1)
    signal.Notify(c, os.Interrupt)
    defer close(c)
    go func() {
        for range c {
            // Dump stack traces of all goroutines to stderr.
            pprof.Lookup("goroutine").WriteTo(os.Stderr, 2)
            signal.Stop(c)
            break
        }
    }()
    // ...
}

That might help you identify where in your code, or where in io.Copy, the issue is occurring.

Perhaps easier (though it doesn’t work on Windows): you can kill -QUIT the process, which will produce a backtrace (this is equivalent to killing the process with Ctrl-\ instead of Ctrl-C). If you do this while it is stuck, you can usually see quite quickly whether it is deadlocked.
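
For example, assuming the process binary is named backup:

kill -QUIT $(pgrep backup)

The Go runtime’s default SIGQUIT handler dumps all goroutine stacks to stderr before exiting.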
