New to Go: SQL to CSV

Hi, new to Go. I'm trying to generate a CSV report from a group of MSSQL databases. With help from a friend I have been able to get the information I need to the screen by iterating through the data with the rows.Scan function, but I cannot seem to link it up to anything that successfully outputs a CSV file. Any help would be appreciated.

The easiest way would be to use a package like this: github.com/joho/sqltocsv

DB, err := sql.Open("postgres", models.Config.PsqlInfo)
if err != nil {
	panic(err)
}
defer DB.Close()
result, err := DB.Query(`Select * from Database`)
if err != nil {
	log.Fatal(err)
}
csvConverter := sqltocsv.New(result)
if err := csvConverter.WriteFile("result.csv"); err != nil {
	log.Fatal(err)
}

Have a look at joho's README.


Hi, that's similar to what I have tried. My issue is that I have to build the SELECT up in pieces as it's complicated, and I couldn't work out how to feed that into the query, so I was using rows.Scan to collect the data from the DB.

tsql := fmt.Sprintf(sql)

// Execute query
rows, err := db.QueryContext(ctx, tsql)
if err != nil {
    return -1, err
}

That's how I had to build my SELECT statement, and as such I could not see how to use db.Query with the context.

Also, I have tried this with a simplified SELECT query and get the error
.\summarybk_multidb2.go:213:30: undefined: result - What am I doing wrong? Cheers.

Have you tried iterating over the rows like this? (It's tough to follow your example without any data examples.)

file, err := os.Create("result.csv")
checkError("Cannot create file", err)
defer file.Close()

writer := csv.NewWriter(file)
defer writer.Flush()

for rows.Next() {
    var record []string
    // Scan the row's columns into strings and append them to record,
    // then write the record; csv.Writer.Write takes a []string.
    err := writer.Write(record)
    checkError("Cannot write to file", err)
}

func checkError(message string, err error) {
    if err != nil {
        log.Fatal(message, err)
    }
}

Sorry, not sure how to get it all to show as code. Everything from func ReadSQL() onwards is my code, currently shown to screen to prove it works; I use a config file to point to multiple databases. Hopefully that shows how I have been getting info out of the DB with a scan. Cheers.

OK, a quick and dirty solution would be to Sprintf the variables together into one string per row and store them in a map[int]string, where count would be your int and the string would be the joined row. Then range over the map and write line by line to a file, as described above.

// add before the loop
mapofrows := make(map[int]string)

/// add to your rows.Next() loop after log.Println()
a := fmt.Sprintf(server_name, database_name, recovery_model_desc, last_full_backup, last_diff_backup, last_tran_backup, days_since_full_backup, days_since_diff_backup, hours_since_tranlog_backup, full_backup_location, diff_backup_location, tlog_backup_location)
mapofrows[count] = a

/// add after the loop 

for _, jointstring := range mapofrows {
    err := writer.Write(jointstring)
    checkError("Cannot write to file", err)
}

Hope that does it.
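One caveat with the map[int]string approach: ranging over a Go map visits keys in no particular order, so if the row order matters it is safer to walk the indexes explicitly. A minimal sketch with invented rows:

```go
package main

import "fmt"

func main() {
	// Invented stand-ins for the joined rows collected in the scan loop.
	mapofrows := make(map[int]string)
	mapofrows[0] = "server01,Logship,FULL"
	mapofrows[1] = "server02,Sales,SIMPLE"

	// Ranging over a map visits keys in no particular order, so walk
	// the indexes 0..len-1 explicitly to keep the rows in row order.
	for i := 0; i < len(mapofrows); i++ {
		fmt.Println(mapofrows[i])
	}
}
```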

Many thanks for the help. I'll have a play, but as I'm a complete novice I don't really know how to manipulate my code to accept the suggested changes. You made good notes, though, so we'll see. Cheers, J.

cannot use jointstring (type string) as type []string in argument to writer.Write

checkError was just a function to handle errors:

func checkError(message string, err error) {
    if err != nil {
        log.Fatal(message, err)
    }
}

For the writer you need to import "os" and "encoding/csv"; sorry, forgot to mention that. My IDE handles the imports automatically.

So like this. (I made some small modifications to the previous code; let me know if this works.)

package main

import (
	"context"
	"encoding/csv"
	"fmt"
	"log"
	"os"
)

func checkError(message string, err error) {
	if err != nil {
		log.Fatal(message, err)
	}
}

func ReadSQL() (int, error) {
	ctx := context.Background()
	// Check if database is alive.
	err := db.PingContext(ctx)
	if err != nil {
		return -1, err
	}
	
	sql := " WITH [MostRecentBackupStatus_CTE] "
	sql += " AS "
	sql += " ( "
	sql += " SELECT bsfull.[server_name] , "
	sql += " bsfull.[database_name] , "
	sql += " bsfull.[backup_finish_date] AS [last_full_backup] , "
	sql += " bsdiff.[backup_finish_date] AS [last_diff_backup] , "
	sql += " bstlog.[backup_finish_date] AS [last_tran_backup] , "
	sql += " DATEDIFF(dd, bsfull.[backup_finish_date], CURRENT_TIMESTAMP) AS [days_since_full_backup] , "
	sql += " DATEDIFF(dd, bsdiff.[backup_finish_date], CURRENT_TIMESTAMP) AS [days_since_diff_backup] , "
	sql += " DATEDIFF(hh, bstlog.[backup_finish_date], CURRENT_TIMESTAMP) AS [hours_since_tranlog_backup] , "
	sql += " ( SELECT [physical_device_name] "
	sql += " FROM [msdb]…[backupmediafamily] bmf "
	sql += " WHERE bmf.[media_set_id] = bsfull.[media_set_id] "
	sql += " ) AS [full_backup_location] , "
	sql += " ( SELECT [physical_device_name] "
	sql += " FROM [msdb]…[backupmediafamily] bmf "
	sql += " WHERE bmf.[media_set_id] = bsdiff.[media_set_id] "
	sql += " ) AS [diff_backup_location] , "
	sql += " ( SELECT [physical_device_name] "
	sql += " FROM [msdb]…[backupmediafamily] bmf "
	sql += " WHERE bmf.[media_set_id] = bstlog.[media_set_id] "
	sql += " ) AS [tlog_backup_location] "
	sql += " FROM [msdb]…[backupset] AS bsfull "
	sql += " LEFT JOIN [msdb]…[backupset] AS bstlog ON bstlog.[database_name] = bsfull.[database_name] "
	sql += " AND bstlog.[server_name] = bsfull.[server_name] "
	sql += " AND bstlog.[type] = ‘L’ "
	sql += " AND bstlog.[backup_finish_date] = ( (SELECT MAX([backup_finish_date]) "
	sql += " FROM [msdb]…[backupset] b2 "
	sql += " WHERE b2.[database_name] = bsfull.[database_name] "
	sql += " AND b2.[server_name] = bsfull.[server_name] "
	sql += " AND b2.[type] = ‘L’) ) "
	sql += " LEFT JOIN [msdb]…[backupset] AS bsdiff ON bsdiff.[database_name] = bsfull.[database_name] "
	sql += " AND bsdiff.[server_name] = bsfull.[server_name] "
	sql += " AND bsdiff.[type] = ‘I’ "
	sql += " AND bsdiff.[backup_finish_date] = ( (SELECT MAX([backup_finish_date]) "
	sql += " FROM [msdb]…[backupset] b2 "
	sql += " WHERE b2.[database_name] = bsfull.[database_name] "
	sql += " AND b2.[server_name] = bsfull.[server_name] "
	sql += " AND b2.[type] = N’I’) ) "
	sql += " WHERE bsfull.[type] = N’D’ "
	sql += " AND bsfull.[backup_finish_date] = ( (SELECT MAX([backup_finish_date]) "
	sql += " FROM [msdb]…[backupset] b2 "
	sql += " WHERE b2.[database_name] = bsfull.[database_name] "
	sql += " AND b2.[server_name] = bsfull.[server_name] "
	sql += " AND b2.[type] = N’D’) ) "
	sql += " AND EXISTS ( SELECT [name] "
	sql += " FROM [master].[sys].[databases] "
	sql += " WHERE [name] = bsfull.[database_name] ) "
	sql += " AND bsfull.[database_name] <> N’tempdb’ "
	sql += " ) "
	sql += " SELECT c.[server_name] , "
	sql += " c.[database_name] , "
	sql += " d.[recovery_model_desc] , "
	sql += " COALESCE(convert(varchar,c.[last_full_backup],121),‘N/A’) as last_full_backup, "
	sql += " COALESCE(convert(varchar,c.[last_diff_backup],121),‘N/A’) as last_diff_backup, "
	sql += " COALESCE(convert(varchar,c.[last_tran_backup],121),‘N/A’) as last_tran_backup, "
	sql += " COALESCE(convert(varchar,c.[days_since_full_backup]),‘N/A’) as days_since_full_backup, "
	sql += " COALESCE(convert(varchar,c.[days_since_diff_backup]),‘N/A’) as days_since_diff_backup, "
	sql += " COALESCE(convert(varchar,c.[hours_since_tranlog_backup]),‘N/A’) as hours_since_tranlog_backup, "
	sql += " isnull(c.[full_backup_location],‘Not Created’) as full_backup_location, "
	sql += " isnull([diff_backup_location],‘Not Created’) as diff_backup_location, "
	sql += " isnull([tlog_backup_location],‘Not Created’) as tlog_backup_location "
	sql += " FROM [MostRecentBackupStatus_CTE] c "
	sql += " INNER JOIN [master].[sys].[databases] d ON c.[database_name] = d.[name]; "
	
	var sliceofrows []string
	
	tsql := fmt.Sprintf(sql)

	// Execute query
	rows, err := db.QueryContext(ctx, tsql)
	if err != nil {
		return -1, err
	}

	defer rows.Close()

	var count int

	// Iterate through the result set.
	for rows.Next() {
		var server_name string
		var database_name string
		var recovery_model_desc string
		var last_full_backup string
		var last_diff_backup string
		var last_tran_backup string
		var days_since_full_backup string
		var days_since_diff_backup string
		var hours_since_tranlog_backup string
		var full_backup_location string
		var diff_backup_location string
		var tlog_backup_location string

		err := rows.Scan(&server_name, &database_name, &recovery_model_desc, &last_full_backup, &last_diff_backup, &last_tran_backup, &days_since_full_backup, &days_since_diff_backup, &hours_since_tranlog_backup, &full_backup_location, &diff_backup_location, &tlog_backup_location)
		if err != nil {
			return -1, err
		}
		log.Println(server_name, database_name, recovery_model_desc, last_full_backup, last_diff_backup, last_tran_backup, days_since_full_backup, days_since_diff_backup, hours_since_tranlog_backup, full_backup_location, diff_backup_location, tlog_backup_location)

		a := fmt.Sprintf(server_name, database_name, recovery_model_desc, last_full_backup, last_diff_backup, last_tran_backup, days_since_full_backup, days_since_diff_backup, hours_since_tranlog_backup, full_backup_location, diff_backup_location, tlog_backup_location)
		sliceofrows = append(sliceofrows, a)

		count++
	}

	file, err := os.Create("result.csv")
	checkError("Cannot create file", err)
	defer file.Close()
	writer := csv.NewWriter(file)
	defer writer.Flush()
	err = writer.Write(sliceofrows)
	checkError("Cannot write to file", err)

	return count, nil
}

func main(){
	
		
}

Hi, I got it to run through more of the code, but got this error:

cannot use jointstring (type string) as type []string in argument to writer.Write.

I'll look at your code now, cheers.

I added my DB connection to the code, as that was obviously not there, and it runs through with no errors, but it doesn't create the CSV or flash the print to the screen as you have shown. Not sure how to get all the code to show in yellow as you have done? I can paste where I am up to if needed. Cheers again.

Do you call ReadSQL() from your main function? I did not put that in the code, since I didn't know what else you are doing and how you set up the db connection.
You get the code in yellow by surrounding it with ``` (three backticks) at the top and bottom.

cool. really appreciate this.
What I am trying to do is read info from a database on 3 test servers, so pass the connection in on a config file, then the idea is to iterate via the 3 servers, and when using the print to screen it works, the issue was trying to get it all to go to the csv. I got it to print a csv when I first started, but it only creates the csv for the last DB is saw, not all for them, so since I added the row scan to get the data I’ve been struggling.


import (
	"context"
	"database/sql"
	"encoding/csv"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"strconv"

	_ "github.com/denisenkom/go-mssqldb"
)

var db *sql.DB

// Databases struct which contains
// an array of databases
type Databases struct {
 Databases []Database `json:"databases"`
}

// Database struct which contains
// all connection details for target database
type Database struct {
 Username   string `json:"Username"`
 Password   string `json:"Password"`
 Server     string `json:"Server"`
 Port       int    `json:"Port"`
 Database   string `json:"Database"`
}

func main() {

        var err error

 // Open our jsonFile
 jsonFile, err := os.Open("targetproperties2.cfg")
 // if we os.Open returns an error then handle it
 if err != nil {
        fmt.Println(err)
 }
 fmt.Println("Successfully Opened targetproperties.cfg")
 // defer the closing of our jsonFile so that we can parse it later on
 defer jsonFile.Close()

 byteValue, _ := ioutil.ReadAll(jsonFile)

        // we initialize our Databases array
        var databases Databases

 // we unmarshal our byteArray which contains our
 // jsonFile's content into 'databases' which we defined above
 json.Unmarshal(byteValue, &databases)

        // we iterate through every database within our databases array and
        // print out the connection details

        for i := 0; i < len(databases.Databases); i++ {
         fmt.Println("Server: " + databases.Databases[i].Server)
         //fmt.Println("Port: " + strconv.Itoa(databases.Databases[i].Port))
                fmt.Println("Port: " + strconv.Itoa(databases.Databases[i].Port))
         //fmt.Println("Port: " + databases.Databases[i].Port)
         fmt.Println("Database: " + databases.Databases[i].Database)
         fmt.Println("User Name: " + databases.Databases[i].Username)
         fmt.Println("Password: " + databases.Databases[i].Password)

                // Build connection string
                //connString := fmt.Sprintf("server=%s;user id=%s;password=%s;port=%d;database=%s;",
                //    server, user, password, port, database)
                connString := fmt.Sprintf("server=%s;user id=%s;password=%s;port=%d;database=%s;",
                    databases.Databases[i].Server, databases.Databases[i].Username, databases.Databases[i].Password, databases.Databases[i].Port, databases.Databases[i].Database)

                // Create connection pool
                db, err = sql.Open("sqlserver", connString)
                if err != nil {
                    log.Fatal("Error creating connection pool: ", err.Error())
                }
                ctx := context.Background()
                err = db.PingContext(ctx)
                if err != nil {
                    log.Fatal(err.Error())
                }
                fmt.Printf("Connected!\n")


                // Read backup history
                count, err := ReadBackupHistory()
                if err != nil {
                    log.Fatal("Error reading backup history: ", err.Error())
                }
                fmt.Printf("Read %d row(s) successfully.\n", count)

        }
}

// ReadBackupHistory reads database backup information for specified connection
  
func ReadBackupHistory() (int, error) {
    ctx := context.Background()

    // Check if database is alive.
    err := db.PingContext(ctx)
    if err != nil {
        return -1, err
    }
	
sql := " WITH [MostRecentBackupStatus_CTE] "
sql += " AS "
sql += " ( "
sql += "     SELECT  bsfull.[server_name] , "
sql += "             bsfull.[database_name] , "
sql += "             bsfull.[backup_finish_date] AS  [last_full_backup] , "
sql += "             bsdiff.[backup_finish_date] AS [last_diff_backup] , "
sql += "             bstlog.[backup_finish_date] AS [last_tran_backup] , "
sql += "             DATEDIFF(dd, bsfull.[backup_finish_date], CURRENT_TIMESTAMP) AS [days_since_full_backup] , "
sql += "             DATEDIFF(dd, bsdiff.[backup_finish_date], CURRENT_TIMESTAMP) AS [days_since_diff_backup] , "
sql += "             DATEDIFF(hh, bstlog.[backup_finish_date], CURRENT_TIMESTAMP) AS [hours_since_tranlog_backup] , "
sql += "             ( SELECT    [physical_device_name] "
sql += "               FROM      [msdb]..[backupmediafamily] bmf "
sql += "               WHERE     bmf.[media_set_id] = bsfull.[media_set_id] "
sql += "             ) AS [full_backup_location] , "
sql += "             ( SELECT    [physical_device_name] "
sql += "              FROM      [msdb]..[backupmediafamily] bmf "
sql += "               WHERE     bmf.[media_set_id] = bsdiff.[media_set_id] "
sql += "             ) AS [diff_backup_location] ,  "
sql += "             ( SELECT    [physical_device_name] "
sql += "               FROM      [msdb]..[backupmediafamily] bmf "
sql += "               WHERE     bmf.[media_set_id] = bstlog.[media_set_id] "
sql += "             ) AS [tlog_backup_location] "
sql += "    FROM    [msdb]..[backupset] AS bsfull "
sql += "             LEFT JOIN [msdb]..[backupset] AS bstlog ON bstlog.[database_name] = bsfull.[database_name] "
sql += "                                                       AND bstlog.[server_name] = bsfull.[server_name] "
sql += "                                                        AND bstlog.[type] = 'L' "
sql += "                                                        AND bstlog.[backup_finish_date] = ( (SELECT  MAX([backup_finish_date]) "
sql += "                                                                                            FROM    [msdb]..[backupset] b2 "
sql += "                                                                                             WHERE   b2.[database_name] = bsfull.[database_name] "
sql += "                                                                                                     AND b2.[server_name] = bsfull.[server_name] "
sql += "                                                                                                     AND b2.[type] = 'L') ) "
sql += "             LEFT JOIN [msdb]..[backupset] AS bsdiff ON bsdiff.[database_name] = bsfull.[database_name] "
sql += "                                                        AND bsdiff.[server_name] = bsfull.[server_name] "
sql += "                                                        AND bsdiff.[type] = 'I' "
sql += "                                                        AND bsdiff.[backup_finish_date] = ( (SELECT  MAX([backup_finish_date]) "
sql += "                                                                                             FROM    [msdb]..[backupset] b2 "
sql += "                                                                                             WHERE   b2.[database_name] = bsfull.[database_name] "
sql += "                                                                                                     AND b2.[server_name] = bsfull.[server_name] "
sql += "                                                                                                     AND b2.[type] = N'I') ) " 
sql += "     WHERE   bsfull.[type] = N'D' "
sql += "             AND bsfull.[backup_finish_date] = ( (SELECT MAX([backup_finish_date]) "
sql += "                                                  FROM   [msdb]..[backupset] b2 "
sql += "                                                  WHERE  b2.[database_name] = bsfull.[database_name] "
sql += "                                                         AND b2.[server_name] = bsfull.[server_name] "
sql += "                                                         AND b2.[type] = N'D') ) "
sql += "             AND EXISTS ( SELECT [name] "
sql += "                          FROM   [master].[sys].[databases] "
sql += "                          WHERE  [name] = bsfull.[database_name] ) "
sql += "             AND bsfull.[database_name] <> N'tempdb' "
sql += " ) "
sql += " SELECT  c.[server_name] , "
sql += "         c.[database_name] , "
sql += "         d.[recovery_model_desc] , "
sql += "   COALESCE(convert(varchar,c.[last_full_backup],121),'N/A')  as last_full_backup, "
sql += "   COALESCE(convert(varchar,c.[last_diff_backup],121),'N/A')   as last_diff_backup, "
sql += "   COALESCE(convert(varchar,c.[last_tran_backup],121),'N/A')   as last_tran_backup, " 
sql += "   COALESCE(convert(varchar,c.[days_since_full_backup]),'N/A') as days_since_full_backup, "
sql += "   COALESCE(convert(varchar,c.[days_since_diff_backup]),'N/A') as days_since_diff_backup, "
sql += "   COALESCE(convert(varchar,c.[hours_since_tranlog_backup]),'N/A') as hours_since_tranlog_backup, "
sql += "        isnull(c.[full_backup_location],'Not Created') as full_backup_location, "
sql += "        isnull([diff_backup_location],'Not Created') as diff_backup_location, "
sql += "        isnull([tlog_backup_location],'Not Created') as tlog_backup_location "
sql += " FROM    [MostRecentBackupStatus_CTE] c "
sql += "         INNER JOIN [master].[sys].[databases] d ON c.[database_name] = d.[name]; "

    var mapofrows []string
	tsql := fmt.Sprintf(sql)

    // Execute query
    rows, err := db.QueryContext(ctx, tsql)
    if err != nil {
        return -1, err
    }
    defer rows.Close()

    var count int
//  Iterate through the result set.
//    mapofrows := make(map[int]string)
    for rows.Next() {
	var server_name string
	var database_name string
	var recovery_model_desc string
	var last_full_backup string
	var last_diff_backup string
	var last_tran_backup string
	var days_since_full_backup string
	var days_since_diff_backup string
	var hours_since_tranlog_backup string
	var full_backup_location string
	var diff_backup_location string
    var tlog_backup_location string
	

        // Get values from row.
        err := rows.Scan(&server_name,&database_name,&recovery_model_desc,&last_full_backup,&last_diff_backup,&last_tran_backup,&days_since_full_backup,&days_since_diff_backup,&hours_since_tranlog_backup,&full_backup_location,&diff_backup_location,&tlog_backup_location)
        if err != nil {
            return -1, err
        }
        
		
		a:=fmt.Sprintf(server_name,database_name,recovery_model_desc,last_full_backup,last_diff_backup,last_tran_backup,days_since_full_backup,days_since_diff_backup,hours_since_tranlog_backup,full_backup_location,diff_backup_location,tlog_backup_location)
        mapofrows=append(mapofrows,a)
		
    count++   
		
    
	file, err := os.Create("result.csv")
	checkError("Cannot create file", err)
	defer file.Close()
	writer := csv.NewWriter(file)
	defer writer.Flush()
	err = writer.Write(mapofrows)
	checkError("Cannot write to file", err)
}
	//count++

    return count, nil
	
}

func checkError(message string, err error) {
    if err != nil {
        log.Fatal(message, err)
    }
    }

The targetproperties2.cfg file just has the database name, port, userID and password.

It created the CSV file, but only with details from the last database it saw, and the formatting is a bit mental lol

Each line had lots of string= fragments, and it was all in one cell.

SERVER03%!(EXTRA string=Logship, string=FULL, string=2018-11-25 11:52:19.000, string=N/A, string=2018-11-27 08:30:01.000, string=33, string=N/A, string=748, string=\server01\backup\Logship.bak, string=Not Created, string=c:\Backup\Logship_20181127163001.trn)

I added the log.Println line so I can see the output, and it goes through the 3 DBs, but it only dumps lines of data from the last DB it sees to the CSV. So, progress. Thanks.


Like I said, that was the quick and dirty solution. In order to make a nice CSV/Excel file you could create a slice of slices and then write it cleanly to the CSV file. Here is a nice explanation of how to create the slices and write the file: https://appliedgo.net/spreadsheet/

It only writes the data from the last DB because each pass overwrites the previous data. To get the result set for all databases, you could create a file for each of them, by checking the DB name and creating a file accordingly. Alternatively, create three slices such as var db1 []string, var db2 []string, etc., check which DB it is before appending, and then either write a file for each or join them and write one file.