Efficiently chunk a large file and upload it to S3

Hi golang experts.
The use case: chunk a large file (1 GB < x < 5 GB), calculate a zsync file for each chunk (think of it as a signature-like file if that doesn't ring a bell), and then upload both the chunks and the zsync files to Amazon S3.
Initially I propose this design. Let's say the file is 3 GB and I decide to chunk it into 1 GB pieces:
fileA_3gb.bin
|-- fileA_chunk1 (buffer 1)
|-- fileA_chunk2 (buffer 2)
|-- fileA_chunk3 (buffer 3)
After that, I calculate the signature file for each chunk and upload the chunk, along with its signature, to S3.
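
Something like this is what I have in mind. Just a rough sketch using the v1 AWS SDK's s3manager uploader: the bucket, region, key naming scheme, and the 1 GiB chunk size are placeholders, and the zsync step is left as a TODO since it depends on which library generates the signature.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

const chunkSize int64 = 1 << 30 // 1 GiB per chunk, matching the example above

func uploadChunks(path, bucket string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		return err
	}
	numChunks := (info.Size() + chunkSize - 1) / chunkSize

	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"), // placeholder region
	}))
	uploader := s3manager.NewUploader(sess)

	for i := int64(0); i < numChunks; i++ {
		// LimitReader exposes at most chunkSize bytes of the file; the
		// uploader streams from it part by part, so the whole chunk is
		// never held in memory at once.
		chunk := io.LimitReader(f, chunkSize)

		key := fmt.Sprintf("%s_chunk%d", filepath.Base(path), i+1) // naming scheme is an assumption

		out, err := uploader.Upload(&s3manager.UploadInput{
			Bucket: aws.String(bucket),
			Key:    aws.String(key),
			Body:   chunk,
		})
		if err != nil {
			return err
		}
		log.Printf("uploaded %s to %s", key, out.Location)

		// TODO: compute and upload the zsync/signature file for this chunk
		// (left out here, since it depends on the zsync library).
	}
	return nil
}

func main() {
	if err := uploadChunks("fileA_3gb.bin", "my-bucket"); err != nil {
		log.Fatal(err)
	}
}
```
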
Since I don't want to fill memory with 1 GB of bytes at a time, I suppose I could use bufio for this? I haven't thought deeply about the consequences or complexity, though.
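
Or maybe bufio isn't even needed? As far as I understand, s3manager's uploader accepts any io.Reader and reads it into part-sized buffers (configurable via PartSize and Concurrency), so memory stays bounded well below the full chunk. The signature could then be computed in the same pass with io.TeeReader. A tiny self-contained sketch of that pattern, with sha256 just standing in for the real zsync calculation:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"strings"
)

// hashWhileCopying shows the single-pass pattern: TeeReader feeds every byte
// that the consumer reads (here io.Copy, in the real flow the S3 uploader)
// into the hash as well, so the chunk is read once and never buffered twice.
func hashWhileCopying(src io.Reader, dst io.Writer) (string, error) {
	h := sha256.New() // stand-in for the zsync signature calculation
	if _, err := io.Copy(dst, io.TeeReader(src, h)); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	sum, err := hashWhileCopying(strings.NewReader("pretend this is a 1 GiB chunk"), io.Discard)
	if err != nil {
		panic(err)
	}
	fmt.Println("signature:", sum)
}
```

In the real flow, src would be the io.LimitReader chunk and there would be no separate dst: the TeeReader-wrapped chunk would just be passed as the Body to the uploader, which (as far as I can tell) reads the body sequentially into its part buffers, so the hash sees the bytes in order.
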

But anyway, a proper API suggestion from you would be awesome.
