Last week I got tons of alerts about too many open files on an app running in a restricted environment. In this environment there is a hard limit of 5k file descriptors per Docker container, and the app kept exceeding that limit.
This app makes lots, and lots, and lots of requests to several microservices to generate a final object that is indexed in Elasticsearch. Most of these requests come from blocks of code like this:
resp, err := c.httpClient.Do(req)
if err != nil {
	return nil, errors.Wrap(err, "error during http request execution")
}
defer resp.Body.Close()

if resp.StatusCode != http.StatusOK {
	return nil, errors.New("invalid response status code")
}

content, err := ioutil.ReadAll(resp.Body)
if err != nil {
	return nil, errors.Wrap(err, "error during response body read")
}
I’m always closing the response body, but sometimes the services return an invalid status code and the function just returns without reading the response body. The number of connections in TIME_WAIT goes up, and the file descriptor problem shows up.
I tried changing MaxIdleConnsPerHost and MaxIdleConns on the http.Transport, without success; the number of connections kept going up.
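For context, the change was along these lines (the function name and the numbers here are illustrative, not the actual values I used):

// newHTTPClient shows roughly how the transport was tuned.
// It needs the net/http and time imports.
func newHTTPClient() *http.Client {
	transport := &http.Transport{
		MaxIdleConns:        100, // max idle keep-alive connections across all hosts
		MaxIdleConnsPerHost: 100, // max idle keep-alive connections kept per host
	}
	return &http.Client{
		Transport: transport,
		Timeout:   10 * time.Second,
	}
}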
I think this happens because there is still content in the body to be read, so the http.Transport doesn’t reuse the connection and creates a new one every time.
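If that’s what is going on, draining the body before closing it should let the transport put the connection back into the idle pool instead of opening a new one. A minimal sketch of that idea, applied to the same snippet as above (it also needs the io and io/ioutil imports):

resp, err := c.httpClient.Do(req)
if err != nil {
	return nil, errors.Wrap(err, "error during http request execution")
}
defer func() {
	// Drain anything left in the body so the keep-alive connection
	// can be reused, then close it.
	io.Copy(ioutil.Discard, resp.Body)
	resp.Body.Close()
}()

if resp.StatusCode != http.StatusOK {
	return nil, errors.New("invalid response status code")
}

content, err := ioutil.ReadAll(resp.Body)
if err != nil {
	return nil, errors.Wrap(err, "error during response body read")
}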