Any way to preallocate a slice for SQL rows.Next() iteration?

Hi,

I would like to improve performance by preallocating the slice (var posts []*PostModel), or by doing something else, but I'm not sure how to do that without knowing how many records rows holds. That's the reason I have to use posts = append(posts, post). Could someone please give me an idea?

Thanks

Note: I have the same issue with all RDBMS packages, such as database/sql, MySQL, Postgres (pg, pgx), and so on.

func (s Storage) ListPostsByUser(ctx context.Context, args ListPostsByUserArgs) ([]*PostModel, error) {
	ctx, cancel := context.WithTimeout(ctx, s.timeout)
	defer cancel()

	qry := fmt.Sprintf(`
	SELECT
	id, user_id, text, created_at, deleted_at
	FROM posts
	WHERE user_id = $1
	ORDER BY %s
	LIMIT $2
	OFFSET $3
	`,
		args.OrderBy,
	)

	rows, err := s.Query(ctx, qry, args.UserID, args.Limit, args.Offset)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	// START: Try to enhance starting from here --------------------------------
	var posts []*PostModel

	for rows.Next() {
		post := &PostModel{}

		err := rows.Scan(
			&post.ID,
			&post.UserID,
			&post.Text,
			&post.CreatedAt,
			&post.DeletedAt,
		)
		if err != nil {
			return nil, err
		}

		posts = append(posts, post)
	}
	// END: End of enhancement -------------------------------------------------

	if err := rows.Err(); err != nil {
		return nil, err
	}

	return posts, nil
}

This SO article covers what I think are your only real options.

I don’t think there’s any built-in method for getting the count. Another idea is to just guess: if you know there are going to be at least X records returned, preallocate that much. If you guessed too much, just sub-slice your slice. If you guessed too little, append the amount you were short (which isn’t ideal, but it’s better than appending every time!).
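Roughly, a sketch of the guess idea, preallocating capacity instead of length so the slice only ever contains scanned rows (expectedRows is a made-up placeholder for whatever guess you land on, not something from your code):

	// expectedRows is a hypothetical guess, e.g. a typical page size.
	const expectedRows = 64

	// Length 0, capacity expectedRows: append fills the preallocated backing
	// array first and only reallocates if the guess was too small.
	posts := make([]*PostModel, 0, expectedRows)

	for rows.Next() {
		post := &PostModel{}
		if err := rows.Scan(&post.ID, &post.UserID, &post.Text, &post.CreatedAt, &post.DeletedAt); err != nil {
			return nil, err
		}
		posts = append(posts, post) // no zero-value slots to clean up afterwards
	}

With a capacity-only preallocation you never need to sub-slice: len(posts) grows only as rows are scanned, and append only reallocates if the guess was too small.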

Perhaps you could run a COUNT query against the DB periodically (in another goroutine) to come up with your guess.
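If you went that route, a rough sketch (using database/sql for illustration; the rowEstimate field, refreshEstimate method, and the refresh interval are all invented for this example, not part of your Storage):

	// Hypothetical: keep a rough row count refreshed in the background and read
	// it later as the initial slice capacity. Needs Go 1.19+ for atomic.Int64.
	// Imports: context, database/sql, sync/atomic, time.
	type Storage struct {
		db          *sql.DB
		rowEstimate atomic.Int64
	}

	// refreshEstimate runs in its own goroutine and keeps updating the
	// estimate until ctx is cancelled.
	func (s *Storage) refreshEstimate(ctx context.Context, every time.Duration) {
		ticker := time.NewTicker(every)
		defer ticker.Stop()

		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				var n int64
				if err := s.db.QueryRowContext(ctx, `SELECT COUNT(*) FROM posts`).Scan(&n); err == nil {
					s.rowEstimate.Store(n)
				}
			}
		}
	}

ListPostsByUser could then seed the slice with make([]*PostModel, 0, c), where c is the smaller of the stored estimate and args.Limit, and keep appending as before.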

I would also always recommend doing some micro-benchmarks or profiling your app! Append is pretty fast - unless there are millions of records, the query itself likely takes much longer than the appends.
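For the benchmarking part, here's a minimal sketch of a micro-benchmark comparing plain append with a preallocated capacity; the fixed count of 1000 rows is an arbitrary assumption and PostModel comes from your snippet:

	// posts_bench_test.go
	package storage // assumed package name, same package as PostModel

	import "testing"

	// Grows the slice from nil, reallocating as append outgrows the capacity.
	func BenchmarkAppendNoPrealloc(b *testing.B) {
		for i := 0; i < b.N; i++ {
			var posts []*PostModel
			for j := 0; j < 1000; j++ {
				posts = append(posts, &PostModel{})
			}
			_ = posts
		}
	}

	// Same loop, but the backing array is allocated once up front.
	func BenchmarkAppendPrealloc(b *testing.B) {
		for i := 0; i < b.N; i++ {
			posts := make([]*PostModel, 0, 1000)
			for j := 0; j < 1000; j++ {
				posts = append(posts, &PostModel{})
			}
			_ = posts
		}
	}

Running it with go test -bench=Append -benchmem shows allocations per op for both variants; note it only measures the slice growth, not the query.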

Not sure if I understood your case, but if you are doing a paginated query, it will return at most “limit” records from your database, so your posts slice can hold at most “limit” elements.
So you can do something like:

posts := make([]*PostModel, args.Limit)
index := 0
for rows.Next() {
    ...
    posts[index] = post // index++ is a statement in Go, so it can't go inside the brackets
    index++
}

I tried all the implementations, but the outcome wasn’t worth pushing forward, because I ended up having to handle/remove empty objects from the slice, since I won’t necessarily always get args.Limit rows back from the DB. I think the cleanest and safest approach is to just stick with what I have. Thank you both for the answers.