Net/http too many open files issue

Hi,
I am working on a SaaS-based project that is built around a REST API and can have multiple merchant accounts. We need to run cron jobs for every merchant every 10 minutes. The cron URLs are our server's own APIs, so I used Go's net/http package to fulfill this requirement.

Errors I am getting:
http: Accept error: accept tcp [::]:8080: accept4: too many open files; retrying in 1s
and
connection() error occured during connection handshake: dial tcp IP:27017: socket: too many open files.

What I have tried:
I have tried setting the Connection: close header, removing defer in favor of explicit resp.Body.Close() calls, and reusing the http.Client, all to avoid exceeding the connection limit.

Have a look at the code:

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"

	"github.com/robfig/cron/v3"
)

func main() {
	RunCron()
	select {} // block forever so the scheduled jobs keep running
}

func RunCron() {
	c := cron.New()
	c.AddFunc("@every 0h10m0s", SendMsg)
	c.Start()
}

func SendMsg() {
	RunCronBatch()
}

func RunCronBatch() {
	businessNames := []string{"bsName1", "bsName2", "bsName3", "bsName4"}
	client := &http.Client{}
	for _, businessName := range businessNames {
		url := "http://" + businessName + ".main-domain.com"
		// One goroutine per business name: with thousands of names this
		// opens thousands of connections at once.
		go func(client *http.Client, url string) {
			CallHttpRqstScheduler(client, "GET", url, false)
		}(client, url)
	}
}

// APIResponseObj mirrors the server's JSON envelope; the exact fields are not
// shown in the original post, so this is the assumed shape.
type APIResponseObj struct {
	Response struct {
		Data interface{} `json:"data"`
	} `json:"response"`
}

func CallHttpRqstScheduler(client *http.Client, method, url string, checkResp bool, body ...io.Reader) (error, interface{}) {
	var reqBody io.Reader // nil unless an optional body was passed
	if len(body) > 0 {
		reqBody = body[0]
	}

	req, err := http.NewRequest(method, url, reqBody)
	if err != nil {
		fmt.Print(err.Error())
		return err, nil
	}
	req.Header.Set("Connection", "close")
	if method == "POST" {
		req.Header.Add("Content-Type", "application/json")
	}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Print(err.Error())
		return err, nil
	}
	if checkResp {
		bodyBytes, err := ioutil.ReadAll(resp.Body)
		if err != nil {
			fmt.Print(err.Error())
			req.Close = true
			resp.Body.Close()
			return err, nil
		}
		var response APIResponseObj
		err = json.Unmarshal(bodyBytes, &response)
		if err != nil {
			fmt.Print(err.Error())
			req.Close = true
			resp.Body.Close()
			return err, nil
		}
		req.Close = true
		resp.Body.Close()
		return nil, response.Response.Data
	}
	// Note: setting req.Close after client.Do has returned has no effect on
	// this request; it must be set before the request is sent.
	req.Close = true
	resp.Body.Close()
	return nil, nil
}

But this structure is still giving me the errors mentioned above.

History:
Before implementing this structure, I also used curl through the sh command. But that approach was eating up resources as well, and I eventually ran into "resource temporarily unavailable" errors.

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        businessNames := []string{"bsName1", "bsName2", "bsName3", "bsName4"}
        for _, businessName := range businessNames {
            url := "http://" + businessName + ".main-domain.com"
            command := "curl '" + url + "'"
            ExecuteCommand(command)
        }
    }

func ExecuteCommand(command string) error {
    // Runs the given command through `sh -c`; every call forks a new shell
    // and a curl process.
    cmd := exec.Command("sh", "-c", command)
    var out bytes.Buffer
    var stderr bytes.Buffer
    cmd.Stdout = &out
    cmd.Stderr = &stderr

    err := cmd.Run()

    if err != nil {
        fmt.Println(fmt.Sprint(err) + ": " + stderr.String())
        return err
    }
    fmt.Println("Result: " + out.String())
    return nil
}

Can someone guide me on what I am doing wrong or missing in the HTTP approach?

How long is businessNames? If it's thousands or more, you might be exhausting the number of concurrent connections you can have open. If that's the case, instead of creating a new goroutine for each request, create a pool of workers (maybe 10, maybe 100, etc.), send the business names in via a channel, and have a consumer goroutine pull the results back out. A sketch of that idea is below.
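For illustration, here is a minimal sketch of that worker-pool idea. The pool size, the urls slice, and the fetch helper are assumptions for the example, not code from the post above:

    package main

    import (
        "fmt"
        "io"
        "io/ioutil"
        "net/http"
        "sync"
    )

    // fetch is a hypothetical helper: it GETs one URL, then drains and closes
    // the response body so the file descriptor is released promptly.
    func fetch(client *http.Client, url string) error {
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        _, err = io.Copy(ioutil.Discard, resp.Body)
        return err
    }

    func main() {
        urls := []string{"http://bsName1.main-domain.com", "http://bsName2.main-domain.com"}

        const numWorkers = 10 // tune this to your file-descriptor budget
        jobs := make(chan string)
        client := &http.Client{}

        var wg sync.WaitGroup
        for i := 0; i < numWorkers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for url := range jobs {
                    if err := fetch(client, url); err != nil {
                        fmt.Println(err)
                    }
                }
            }()
        }

        for _, u := range urls {
            jobs <- u
        }
        close(jobs)
        wg.Wait()
    }

Only numWorkers requests are ever in flight at once, so the number of open sockets stays bounded no matter how long the business list gets.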

I would also recommend deferring resp.Body.Close() right after checking err != nil following client.Do(req), just to make sure the response body isn't leaking and holding a connection open.
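A minimal sketch of that pattern, using the same variable names as the posted CallHttpRqstScheduler:

    resp, err := client.Do(req)
    if err != nil {
        fmt.Print(err.Error())
        return err, nil
    }
    defer resp.Body.Close() // now runs on every return path that follows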


Yes, the business names can number in the thousands. I have used resp.Body.Close() before every return (and tried it with defer as well). I tried a worker pool too, but the error does not seem to be resolved: https://play.golang.org/p/pMePF8zDvSH.

I guess that error comes from the operating system. Perhaps you are on a default Linux installation, where every connection uses a file descriptor out of a limited pool (theoretically up to 65535). In such a case, even if you properly close the connection, it will still take a while for the system to free the descriptor (remember the zombie, i.e. TIME_WAIT, state of a connection).
So if this happens, you can limit your requests, or you can tune your operating system to accept more open files (look at the ulimit command, e.g. ulimit -n).
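If tuning the limit is acceptable, the process can also raise its own soft limit from Go on Linux; here is a minimal sketch using the standard syscall package (the error messages are illustrative):

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        var rl syscall.Rlimit
        // Read the current soft/hard limits on open file descriptors.
        if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
            fmt.Println("getrlimit:", err)
            return
        }
        rl.Cur = rl.Max // raise the soft limit up to the hard limit
        if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
            fmt.Println("setrlimit:", err)
            return
        }
        fmt.Printf("open-file limit is now %d\n", rl.Cur)
    }

This only raises the soft limit up to the hard limit; raising the hard limit itself still requires root or system-level configuration.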

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.