Go http.Client{} and transport

Recently, an issue was reported from the site. While investigating the root cause, I found the following error in the logs: “connectex: Only one usage of each socket address (protocol/network address/port) is normally permitted.” Upon reviewing our implementation, I discovered that the service is creating a new http.Client{} instance every time an HTTP/HTTPS request is sent to the server.

From what I understand, creating a new instance of http.Client{} for each request is unnecessary. The possible solutions are:

  1. Use a single http.Client{} instance to send multiple requests.
  2. Create a new http.Client{} instance for every request and set the HTTP header Connection: close to avoid reusing connections.

I am leaning toward the first approach, a singleton http.Client{} instance. However, before proposing this solution, I have a couple of questions I would like your help with:

  1. Can we send multiple requests using a single http.Client{} instance? Specifically, does a single http.Client{} support concurrent requests?
  2. If we use a single http.Client{} to send concurrent requests, I understand that it has its own connection pool. After completing the TLS handshake and serving a request, it keeps the connection open for 90 seconds, allowing subsequent requests to reuse the same connection. Is there a security concern in keeping the connection open for such a duration?

Thanks,
Madhusudan.

  1. Yes, this is how it was originally designed: a single http.Client{} instance is safe for concurrent use.
  2. Are there any security issues with long-lived connections? Not really; keeping them open only adds a little resource overhead. The design saves the cost of establishing a new connection for each request to the same target, so responses come back faster.

Hi,

Thank you for the reply.

I was testing the behavior by creating a single http.Client, but to my surprise I get the same issue.

Below is the code I wrote to test the changes.

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	go CreateServer()
	url := "http://127.0.0.1:1111" // Target URL (local dummy server)
	fmt.Println("Starting to generate requests...")

	transport := &http.Transport{
		MaxIdleConns:    100,
		IdleConnTimeout: time.Minute,
	}

	client := &http.Client{Transport: transport}

	// Simulate rapid requests through the shared client
	for i := 0; i < 100000; i++ {
		go func(i int) {
			// Send a request
			httpReq, _ := http.NewRequest("GET", url, nil)
			//httpReq.Header.Add("Connection", "close")
			resp, err := client.Do(httpReq)
			log.Printf("Count : %d", i)
			if err != nil {
				log.Println("Error:", err)
				return
			}
			resp.Body.Close()
			log.Print("response closed")
		}(i)
		time.Sleep(1 * time.Millisecond) // Minimal delay between requests
	}

	// Let goroutines run for a while
	time.Sleep(10 * time.Second)
	fmt.Println("Done generating requests.")
	os.Exit(111)
}

func CreateServer() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello, World!")
	})
	http.ListenAndServe(":1111", nil)
}

Here is the log; you can see a request fail at around the 16,108th count.

OPTCTRL:2025/01/09 15:56:41.480822 optctrl.go:85: response closed
OPTCTRL:2025/01/09 15:56:41.491882 optctrl.go:76: Count : 15905
OPTCTRL:2025/01/09 15:56:41.491882 optctrl.go:85: response closed
OPTCTRL:2025/01/09 15:56:41.511168 optctrl.go:76: Count : 16108
OPTCTRL:2025/01/09 15:56:41.511168 optctrl.go:78: Error: Get “http://127.0.0.1:1111”: dial tcp 127.0.0.1:1111: connectex: No connection could be made because the target machine actively refused it.
OPTCTRL:2025/01/09 15:56:41.559336 optctrl.go:76: Count : 15906
OPTCTRL:2025/01/09 15:56:41.559872 optctrl.go:85: response closed

Am I missing something here?

Thanks,
Madhusudan

Your test code has some concurrency risks; I made some modifications:

func test() {
	go func() {
		_ = http.ListenAndServe(":7777", http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) {
			_, _ = writer.Write([]byte("hello"))
		}))
	}()

	transport := &http.Transport{
		MaxIdleConns:    100,
		IdleConnTimeout: time.Minute,
	}
	client := &http.Client{Transport: transport}

	var count uint64
	var wg sync.WaitGroup
	for i := 0; i < 100000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := client.Get("http://localhost:7777")
			if err != nil {
				panic(err)
			}
			defer resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println(atomic.AddUint64(&count, 1))
			}
		}()
		time.Sleep(time.Millisecond * 1)
	}
	wg.Wait()
	fmt.Println("all", count)
	return
}

In my test case, there is no error.

Thank you for taking the time to understand the code and providing a better approach.

You may have seen my earlier response; I misunderstood that code, please ignore it. I ran your code and got the same error at around the same count. Below is the console log -

16146
16147
16148
16149
16150
panic: Get “http://localhost:7777”: dial tcp [::1]:7777: connectex: Only one usage of each socket address (protocol/network address/port) is normally permitted.

goroutine 145233 [running]:
main.main.func2()
C:/PDI/repo/opt/optctrl/app/optctrl.go:71 +0x12b
created by main.main in goroutine 1
C:/PDI/repo/opt/optctrl/app/optctrl.go:67 +0x376
panic: Get “http://localhost:7777”: dial tcp [::1]:7777: connectex: Only one usage of each socket address (protocol/network address/port) is normally permitted.

goroutine 145149 [running]:
main.main.func2()
C:/PDI/repo/opt/optctrl/app/optctrl.go:71 +0x12b

This seems to be a problem with the Windows configuration rather than with http.Client: some setting limits the maximum number of connections at the network layer. There is no error when I test on Ubuntu.

That is also true; my Windows machine supports 16384 ports. But ideally http.Client should reuse connections, meaning the existing port should be reused rather than a new one opened, if I am not wrong. I cannot understand why it is behaving this way.

After a request finishes, the connection may not yet have been returned to the idle connection pool.

type tListener struct {
	net.Listener
}

var x int

func (t *tListener) Accept() (net.Conn, error) {
	x++
	fmt.Println(x)
	return t.Listener.Accept()
}

func (t *tListener) Close() error {
	return t.Listener.Close()
}

func (t *tListener) Addr() net.Addr {
	return t.Listener.Addr()
}

func test() {
	go func() {
		listen, err := net.Listen("tcp", ":7777")
		if err != nil {
			panic(err)
		}
		defer listen.Close()
		listen = &tListener{listen}
		_ = http.Serve(listen, http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) {
			_, _ = writer.Write([]byte("hello"))
		}))
	}()

	transport := &http.Transport{
		MaxIdleConns:    100,
		IdleConnTimeout: time.Minute,
	}
	client := &http.Client{Transport: transport}

	//var count int64
	var wg sync.WaitGroup
	for i := 0; i < 100000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			//atomic.AddInt64(&count, 1)
			resp, err := client.Get("http://localhost:7777")
			if err != nil {
				panic(err)
			}
			defer resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				//fmt.Println(atomic.AddInt64(&count, -1))
			}
		}()
		time.Sleep(time.Millisecond * 10)
	}
	wg.Wait()
	return
}

I ran the tests on macOS (M3) and found that increasing the request interval (e.g., to 10ms) effectively slowed the growth rate of new connections.

If you want a full investigation, you can try counting connections from the server like this.

var count int64

type tConn struct {
	net.Conn
	once sync.Once
}

func newConn(raw net.Conn, err error) (net.Conn, error) {
	if err == nil {
		fmt.Println(atomic.AddInt64(&count, 1))
		return &tConn{Conn: raw}, nil
	}
	return nil, err
}

func (c *tConn) Close() error {
	c.once.Do(func() {
		fmt.Println(atomic.AddInt64(&count, -1))
	})
	return c.Conn.Close()
}

type tListener struct {
	net.Listener
}

func (t *tListener) Accept() (net.Conn, error) {
	return newConn(t.Listener.Accept())
}

func test() {
	go func() {
		listen, err := net.Listen("tcp", ":7777")
		if err != nil {
			panic(err)
		}
		defer listen.Close()
		listen = &tListener{listen}
		_ = http.Serve(listen, http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) {
			_, _ = writer.Write([]byte("hello"))
		}))
	}()

	transport := &http.Transport{
		MaxIdleConns:    100,
		IdleConnTimeout: time.Minute,
	}
	client := &http.Client{Transport: transport}

	//var count int64
	var wg sync.WaitGroup
	for i := 0; i < 100000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			//atomic.AddInt64(&count, 1)
			resp, err := client.Get("http://localhost:7777")
			if err != nil {
				panic(err)
			}
			defer resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				//fmt.Println(atomic.AddInt64(&count, -1))
			}
		}()
		time.Sleep(time.Millisecond * 1)
	}
	wg.Wait()
	return
}

At the moment, the most reasonable explanation is that you're making a large number of requests (100,000) in a short period of time, which leaves little opportunity for connection reuse: new connections are dialed before existing ones are returned to the idle pool.

By batching, reuse can be observed more intuitively.

func test() {
	go func() {
		listen, err := net.Listen("tcp", ":7777")
		if err != nil {
			panic(err)
		}
		defer listen.Close()
		listen = &tListener{listen}
		_ = http.Serve(listen, http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) {
			_, _ = writer.Write([]byte("hello"))
		}))
	}()

	client := &http.Client{Transport: http.DefaultTransport}

	var wg sync.WaitGroup
	for i := 0; i < 100000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := client.Get("http://localhost:7777")
			if err != nil {
				panic(err)
			}
			defer resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
			}
		}()
		if i%100 == 0 {
			time.Sleep(1 * time.Second)
		}
	}
	wg.Wait()
	return
}