A typical RDBMS does not return the number of rows along with the query result. And it does not necessarily send all resulting rows at once (so the client cannot count the rows either). Both are done for efficiency. Counting all rows in a query result can be an expensive operation, especially with queries that are complex and/or return large numbers of rows. And sending a million rows when the client only wants the first ten is definitely overkill.
With select count(*) you basically tell the RDBMS, “get me the number of rows even if you have to join 35 tables and count through millions of rows.”
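If all you actually need to know is whether a matching row exists, you can ask the database exactly that instead of counting. A rough sketch with sqlx (the connection string and the PostgreSQL-style $1 placeholders are assumptions here; adjust for your driver and the follow table from the thread):

package main

import (
    "fmt"
    "log"

    "github.com/jmoiron/sqlx"
    _ "github.com/lib/pq"
)

func main() {
    // Connection string is a placeholder; adjust for your own database.
    db, err := sqlx.Connect("postgres", "dbname=mydb sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }

    // EXISTS stops at the first matching row, so the database never has
    // to count (or send) every matching row.
    var exists bool
    err = db.Get(&exists, `
        SELECT EXISTS (
            SELECT 1
            FROM follow
            WHERE account_0 = $1 AND account_1 = $2
        )`, 17, 16)
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println("follow exists:", exists)
}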
I understand that. But what I’m not understanding is how we are supposed to check whether a query results in a specific number of rows, if the error (when there are no rows) gets triggered.
I have something like
query := `SELECT id
          FROM follow
          WHERE account_0 = 17
            AND account_1 = 16
          LIMIT 1;`

searchFollow := SearchFollow{}
err := db.Get(&searchFollow, query)
I want to continue if this query does not return any rows and fail if it returns more than 0 rows.
But err automatically comes back with the error “sql: no rows in result set”.
I don’t want it to just surface as an error… I want to handle it.
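A minimal sketch of one way to handle this, assuming sqlx on top of the standard database/sql package: Get hands back sql.ErrNoRows when the query matches nothing, so that specific error can be checked with errors.Is and treated as “not found” rather than as a failure. The package name, struct tag, and wrapper function below are illustrative only:

package example

import (
    "database/sql"
    "errors"
    "fmt"

    "github.com/jmoiron/sqlx"
)

// SearchFollow mirrors the struct from the snippet above; only the id
// column is assumed here.
type SearchFollow struct {
    ID int `db:"id"`
}

// checkFollow returns nil when no follow row exists (the "continue" case)
// and an error when one does, or when the query itself fails.
func checkFollow(db *sqlx.DB) error {
    query := `SELECT id
              FROM follow
              WHERE account_0 = 17
                AND account_1 = 16
              LIMIT 1;`

    searchFollow := SearchFollow{}
    err := db.Get(&searchFollow, query)

    switch {
    case errors.Is(err, sql.ErrNoRows):
        // No matching row: exactly the case we want to continue on.
        return nil
    case err != nil:
        // Some other database error: pass it on unchanged.
        return err
    default:
        // A row came back: treat that as the failure case.
        return fmt.Errorf("follow already exists: %+v", searchFollow)
    }
}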