We are a PHP 7 shop and have been programming in it for 12+ years. I have been toying with the idea of breaking our web server down into serviceable units that handle different tasks, and I have been looking at doing this in Go for a number of years. One of the issues I have is the amount of code required to work with queries in Go versus PHP. In PHP you just set up a variable to hold your query, execute it, fetch a row (or loop through the rows), and each row comes back as an associative array (a map), so you just grab what you need from the result.
In Go you have to first define a variable for each field you're getting back, matching the type of the column in the database, set up your query string, run it, fetch a row from the result, and then manually Scan each field of the result into the variables you defined beforehand. It seems like a lot more work, but work I'm willing to put up with if the benefits are there.
So, I set up a very simple test to compare PHP and Go with respect to MySQL access. NOTE: our database holds more than 1,000 tables, so we're not just talking about querying names and addresses.
Before I list my code I'll state the problem: I connect to the database and then loop 1 million times. In each iteration I grab a random number that matches an ID in the table and query for it. I retrieve the row in both languages, but in PHP this call actually fetches the data into an array I can then use. In Go that would require calling rows.Scan, which I'm not doing, because the program won't compile with variables I declare but never use, so I left that out. If anything, then, the Go program is doing less work than the PHP 7 program.
Results: PHP 7 completes the 1 million lookups in 55 seconds at around 50% average CPU. The Go app performs the same million lookups in just over a minute and never seems to use more than 30% of the CPU.
All tests were run on the same computer, so hardware is not a factor. I also tried removing the last line that fetches the row from the PHP version, and it only saved 4 seconds, so fetching the data in PHP is not very expensive. I didn't try the equivalent in Go since it was already slower and I didn't want to complicate things further.
#################################################
PHP Code:
$startrecord = 9086240;
$endrecord = 9644302;
$conn = new mysqli( $server, $user, $pass, "support" );
//loop 1 million times through randomly grabbed error log entries.
for ( $i = 1; $i < 1000000; $i++ ) {
    $ran = rand( $startrecord, $endrecord );
    $query = "select * from logErrors where ErrorLogID=$ran";
    $result = $conn->query( $query );
    $row = $result->fetch_assoc();
}
#################################################
Go Code:
package main

import (
	"database/sql"
	"fmt"
	"math/rand"
	"strconv"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	startrecord := 9086240
	endrecord := 9644302
	// dbUser, dbPass, dbHost, dbPort, dbName are defined elsewhere
	DBConnect := dbUser + ":" + dbPass + "@" +
		"(" + dbHost + ":" + dbPort + ")/" + dbName + "?charset=utf8&autocommit=true"
	db, err := sql.Open("mysql", DBConnect)
	if err != nil {
		fmt.Printf("Error: %v\n", err)
		return
	}
	defer db.Close()
	rand.Seed(time.Now().UnixNano())
	// loop 1 million times
	for i := 0; i < 1000000; i++ {
		ran := rand.Intn(endrecord-startrecord) + startrecord
		query := "select ErrorLogID, ErrorID from logErrors " +
			"where ErrorLogID=" + strconv.Itoa(ran)
		rows, err := db.Query(query)
		if err != nil {
			fmt.Printf("Error: %v\n", err)
			return
		}
		for rows.Next() {
		}
		rows.Close() // return the connection to the pool
	}
}