Instead of having a single goroutine per host, split even further. Have 10, 20, 40, or however many "workers", and send them job items via a channel from the main loop. Each job item would contain the host and port to scan.
I just realised that you currently do not even run one goroutine per host: you start concurrency goroutines, each doing a full scan of the same host.
The following runs within ~5 seconds on my machine:
package main

import (
    "flag"
    "fmt"
    "net"
    "os"
    "time"
)

type portResult struct {
    port int
    open bool
}

func main() {
    var hostName string
    var concurrency int
    flag.StringVar(&hostName, "i", "", "Give hostname or IP")
    flag.IntVar(&concurrency, "c", 10, "set number of concurrent scans")
    flag.Parse()

    chIn := make(chan int)
    chOut := make(chan portResult)

    // Start the worker pool.
    for i := 0; i < concurrency; i++ {
        go runner(i, hostName, chIn, chOut)
    }

    // Feed the ports from a separate goroutine so the main
    // goroutine is free to drain the results channel.
    go func() {
        for p := 1; p <= 0xffff; p++ {
            chIn <- p
        }
    }()

    // Exactly one result arrives per port.
    for r := 1; r <= 0xffff; r++ {
        info := <-chOut
        if info.open {
            fmt.Printf("Port %v is open\n", info.port)
        }
    }
    close(chIn)
    close(chOut)
}

func runner(id int, host string, ports <-chan int, results chan<- portResult) {
    for p := range ports {
        fmt.Fprintf(os.Stderr, "%d: scanning port %v\n", id, p)
        address := fmt.Sprintf("%s:%d", host, p)
        conn, err := net.DialTimeout("tcp", address, time.Minute)
        if err != nil {
            results <- portResult{port: p, open: false}
            continue
        }
        results <- portResult{port: p, open: true}
        conn.Close()
    }
}
$ time go run . -i localhost -c 100 | tee ports.txt
[…]
go run . -i localhost -c 100 4.81s user 5.32s system 269% cpu 3.755 total
tee ports.txt 0.00s user 0.00s system 0% cpu 3.755 total
This results in 24 lines being printed to stdout, which corresponds to the number of services running on my machine and bound to the loopback interface.
Please also be aware that on my machine there was no noticeable speed-up beyond a concurrency of 4: wall-clock time stayed between 4.5 and 5 seconds for 4, 5, 10 and 100 workers.