Currently, we have a microservice-based environment, and we follow a typical MVC model for all of our transactional business flows.
Main point: we have a repository layer which handles all of our database transactions.
My question is:
Is there an organic way we can track which repository-layer functions have unclosed transactions?
Basically, we use a connection pooling mechanism for all of our database transactions.
I want to find out which repository functions have not closed their database connections properly.
How can I do that without adding any extra wrappers or extra structures?
Closing a connection is a manual step, so developers tend to forget that critical step, sometimes leading to database connection leaks.
Is there any way we can build a framework or automated function such that developers don't have to worry about closing connections and can focus more on the business logic?
Most database management systems (DBMS) offer tools to monitor connection pools. Look for metrics like total connections, active connections, idle connections, and connection duration. A high number of idle connections, or connections lasting significantly longer than typical user interactions, might indicate potential leaks.
Honestly speaking, we have something similar at our org as well.
But what I had in mind was to track database connections being acquired/released at the function level.
For example, if there is a function ListCustomers, I would like to know whether the database connections opened by this function have been closed, returned to the pool, and not held unnecessarily.
I am learning about pooling, and so far I have found that there can be a difference between pooling at the server level and pooling at the database level. With one pool at the server level, all databases share that single pool, so you stay under the connection limit at the server level.
With pooling at the database level, you may hit the server limit, because each database pool adds its own sessions to the server's session list, and then the server stops responding.
Also, you can close each connection at the router (endpoint manager) level instead of at each database call.
In my case it is a PostgreSQL server, where the session limit defaults to a maximum of 100 sessions. At the moment I have pooling at the database level and two databases, which means that if I have 50 sessions for each database, the combined total (50+50) reaches 100 and the server's session limit hits the roof.
Many database connection pools log information about connections, including creation, acquisition, and closure. Analyze these logs for connections held for unusually long periods, indicating potential leaks.
If you want to track, at runtime, which function opens a transaction and forgets to close it, your only option is to create a wrapper structure that injects your hook into the db.Begin() and transaction.Commit()/Rollback() calls.
Since you don’t want to change a lot of code, and sql.DB (or gorm.DB) are concrete types rather than interfaces, I see three ways.
First, a quick one-off solution: create a local copy of the relevant package and put your hook into its source code, then use the local version with a go mod edit -replace directive.
Second: create a wrapper structure as a DB driver which wraps the original driver, delegates everything, and runs your hooks.
Third: create a wrapper structure between your own code and gorm. You will have to update all references to gorm.DB to call your wrapper instead, but if you own the codebase this will be a fast replace-all and less complex than the second option.
Regardless of the solution you implement, you should also use a code linter (like golangci-lint) and define a rule that flags code which doesn’t immediately defer a close on database connections.
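As far as I know, golangci-lint doesn’t let you define fully custom rules out of the box, but it ships linters that cover this case: sqlclosecheck checks that sql.Rows and sql.Stmt values are closed, and rowserrcheck checks that Rows.Err is inspected. A minimal .golangci.yml enabling them might look like:

```yaml
linters:
  enable:
    - sqlclosecheck  # flags sql.Rows / sql.Stmt values that are never closed
    - rowserrcheck   # flags iteration over sql.Rows without checking Rows.Err
```

That catches the most common leak sources at review time, before the runtime tracking above is even needed.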