Bulk load utility speed issue

Hi,
I have created a Go utility that populates data into Couchbase from SQL Server.

The code is as follows:

package main
import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/denisenkom/go-mssqldb"
	"gopkg.in/couchbase/gocb.v1"
)

func main() {

	var (
		items1 []gocb.BulkOp
		items2 []gocb.BulkOp
		items3 []gocb.BulkOp
	)

	cluster, _ := gocb.Connect("couchbase://localhost")
	bucket, _ := cluster.OpenBucket("example", "")
	condb, _ := sql.Open("mssql", "server=.\\SQLEXPRESS;port=62587; user id=sa;password=*******;")
	
	for i := 0; i < 20; i++ {
		items1 = nil
		items2 = nil
		items3 = nil
		//Get data From SQL Server
		getData(condb, &items1, &items2, &items3)
		// Bulk Load JSON into Couchbase Data Bucket 
		InsertData(bucket, &items1)
		InsertData(bucket, &items2)
		InsertData(bucket, &items3)
	}
	
	err := bucket.Close()
	if err != nil {
		fmt.Println("ERROR CLOSING COUCHBASE CONNECTION:", err)
	}

	err = condb.Close()
	if err != nil {
		fmt.Println("ERROR CLOSING THE MSSQL CONNECTION:", err)
	}
}

func getData(condb *sql.DB, items1 *[]gocb.BulkOp, items2 *[]gocb.BulkOp, items3 *[]gocb.BulkOp) {
	var ID string
	var JSONData string
	var Batch int
	query := "Exec PersonLocation.dbo.uspGetPersonLocation"
	rows, err := condb.Query(query)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		err := rows.Scan(&ID, &JSONData, &Batch)
		if err != nil {
			fmt.Println("ERROR:", err)
		}
		switch Batch {
		case 1:
			*items1 = append(*items1, &gocb.UpsertOp{Key: ID, Value: JSONData})
		case 2:
			*items2 = append(*items2, &gocb.UpsertOp{Key: ID, Value: JSONData})
		case 3:
			*items3 = append(*items3, &gocb.UpsertOp{Key: ID, Value: JSONData})
		}
	}
}

func InsertData(bucket *gocb.Bucket, items *[]gocb.BulkOp) {
	err := bucket.Do(*items)
	if err != nil {
		fmt.Println("ERROR PERFORMING BULK INSERT:", err)
	}
	// Individual operations can fail even when Do returns nil, so check each op's Err as well.
	for _, op := range *items {
		if upsert, ok := op.(*gocb.UpsertOp); ok && upsert.Err != nil {
			fmt.Println("ERROR UPSERTING KEY", upsert.Key, ":", upsert.Err)
		}
	}
}

In one iteration of the loop, approximately 5100 records are inserted.
Ideally, it should reach a speed of roughly 5000 inserts per second, and if I launch multiple instances of the same application, the insert speed should scale up accordingly.
Initially it worked fine (I launched 6 instances at a time and the insert speed was approx. 15000 inserts per second), but after a month or so it became very unstable and the insert speed dropped drastically. There hasn't been any change in the hardware configuration either. I am not able to figure out what is causing it to slow down this much.
Please help…

As the issue started appearing after the utility had worked fine for a month, I would suspect that the database is getting slow. I don't know Couchbase, but I can imagine that one or more indexes are slowing down the inserts.

In general, if a database table has one or more indexes set up (for fast retrieval), then every time a new value is inserted, the indexes must be updated. The larger the index trees are, the longer it takes to update them.

At a certain point, it may become faster to remove the index, do the bulk load, and then regenerate the index. Disclaimer: I know this behavior mainly from SQL databases, but I would bet the problem is inherent in any other database type that uses indexes as well.
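In Go with gocb v1, that pattern could look roughly like the sketch below. The index name, the indexed field, and the N1QL statements are made up for illustration; I don't know which indexes (if any) exist on your bucket, so adapt them to your setup:

// Rough sketch: drop a secondary index, run the bulk load, then recreate the index.
// "idx_person_location" and the indexed field are hypothetical examples.
func bulkLoadWithoutIndex(bucket *gocb.Bucket, load func() error) error {
	drop := gocb.NewN1qlQuery("DROP INDEX `example`.`idx_person_location`")
	if _, err := bucket.ExecuteN1qlQuery(drop, nil); err != nil {
		return err
	}
	if err := load(); err != nil {
		return err
	}
	create := gocb.NewN1qlQuery("CREATE INDEX `idx_person_location` ON `example`(personId)")
	_, err := bucket.ExecuteN1qlQuery(create, nil)
	return err
}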

If Couchbase has good performance troubleshooting tools and features available (e.g. useful log output), I would start there and try finding out where most of the time is lost. (I am sure the Couchbase forum contains some tips for improving performance.)
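On the client side, a cheap first step (a sketch assuming the gocb v1 logging API and the standard "time" package) would be to enable the SDK's verbose logger and time each bulk call, so slow batches stand out in the output:

// Sketch: turn on verbose SDK logging once at startup...
gocb.SetLogger(gocb.VerboseStdioLogger())

// ...and time each bulk call, e.g. inside InsertData.
start := time.Now()
if err := bucket.Do(*items); err != nil {
	fmt.Println("ERROR PERFORMING BULK INSERT:", err)
}
fmt.Printf("bulk insert of %d ops took %v\n", len(*items), time.Since(start))

Comparing these timings over a few days should at least tell you whether the slowdown is in the bulk inserts themselves or somewhere else (e.g. the SQL Server query).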