GZIP decompression speed

Hi all, I’m writing an application that needs to decompress a large amount of gzipped data that I have in memory (downloaded from the Internet).

I did some simple benchmarking, decompressing a single file of about 6.6 MB with each of the following approaches (rough sketches of the in-Go approaches follow the list):

  1. saving the data to disk and calling gzcat on it, reading the result from stdout
  2. calling gzcat, writing the data to its stdin, and reading the result from stdout
  3. using the standard compress/gzip library
  4. using the pgzip library
  5. using this optimized gzip library
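
For reference, here is roughly what methods 2 and 3 look like. This is a minimal sketch, not my exact benchmark code: `data` stands in for the gzipped bytes already in memory, and gzcat is assumed to be on the PATH.

```go
package bench

import (
	"bytes"
	"compress/gzip"
	"io"
	"os/exec"
)

// decompressStdlib is method 3: decompress the in-memory data
// with the standard compress/gzip reader.
func decompressStdlib(data []byte) ([]byte, error) {
	zr, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr)
}

// decompressGzcat is method 2: pipe the data to an external gzcat
// process over stdin and read the decompressed output from stdout.
func decompressGzcat(data []byte) ([]byte, error) {
	cmd := exec.Command("gzcat")
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Output()
}
```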

With methods 1 and 2 I get almost the same result (I have an SSD, so writing the file is probably very fast), and both are faster than the others.
Method 3 is the worst, being almost 100% slower than gzcat.
Methods 4 and 5 are almost the same, at about 40% slower than gzcat.
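
Method 4 only differs from method 3 in which reader is constructed; a minimal sketch, assuming the github.com/klauspost/pgzip package (its NewReader mirrors compress/gzip):

```go
package bench

import (
	"bytes"
	"io"

	"github.com/klauspost/pgzip"
)

// decompressPgzip is method 4: decompress the in-memory data with
// pgzip's gzip-compatible reader.
func decompressPgzip(data []byte) ([]byte, error) {
	zr, err := pgzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr)
}
```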

My question is: how can saving the data to disk and calling an external program be so much faster than using the Go implementation?
