What is the correct way to optimize multithreading with WaitGroup and goroutines?

I am trying to make my application run as fast as possible. I purchased a semi-powerful container off of Google Cloud and I am itching to see how many iterations per second I can get out of this program. However, I am new to Go, and so far my implementation has turned out to be messy and is not working well.

The way I have it set up now, it starts out at a high rate (around 11,000 iterations per second) but then quickly dwindles down to 2,000. My goal is a far bigger number than even 11,000. Also, the infofunc(i) function can't seem to keep up at high speeds, and running it in its own goroutine causes the output printed to the console to overlap. It also occasionally reuses the WaitGroup before the Wait has returned.

I don't like to be the person who asks to be spoon-fed code, but I am at a loss as to how to implement this. There seem to be so many different approaches to parallelism, multithreading, etc., and it is confusing to me.

package main

import (
    "fmt"
    "math/big"
    "os"
    "os/exec"
    "sync"
    "time"
)

var found = 0
var pages_queried = 0
var start_time = time.Now()
var bignum = new(big.Int)
var foundAddresses = 0
var wg sync.WaitGroup
var set = make(map[string]bool)
var addresses = []string{"6ab42gyr", "lo08n4g6"}

func main() {
    bignum.SetString("1000000000000000000000000000", 10)
    pick := os.Args[1]
    kpp := 128
    switch pick {
    case "btc":
        i := new(big.Int)
        i, ok := i.SetString(os.Args[2], 10)
        if ok {
            cmd := exec.Command("clear")
            cmd.Stdout = os.Stdout
            cmd.Run()
            for i.Cmp(bignum) < 0 {
                wg.Add(1)
                go func(i *big.Int) {
                    defer wg.Done()
                    go printKeys(i.String(), kpp)
                    i.Add(i, big.NewInt(1))
                    pages_queried += 1
                    infofunc(i)
                }(i)
                wg.Wait()
            }
        }
    }
}

func infofunc(i *big.Int) {
    elapsed := time.Now().Sub(start_time)
    duration, _ := time.ParseDuration(elapsed.String())
    duration2 := int(duration.Seconds())
    if duration2 != 0 {
        fmt.Printf("\033[5;0H")
        fmt.Printf("Started at %s. Found: %d. Elapsed: %s. Queried: %d pages. Current page: %s. Rate: %d/s", start_time.String(), found, elapsed.String(), pages_queried, i.String(), (pages_queried / duration2))
    }
}

func printKeys(pageNumber string, keysPerPage int) {
    keys := generateKeys(pageNumber, keysPerPage)
    length := len(keys)
    var addressesLen = len(addresses)

    for i := 0; i < length; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            for ii := 0; ii < addressesLen; ii++ {
                wg.Add(1)
                go func(i int, ii int, keys []key) {
                    defer wg.Done()
                    for _, v := range addresses {
                        if set[keys[i].compressed] || set[keys[i].uncompressed] {
                            fmt.Print("Found an address: " + v + "!
")
                            fmt.Printf("%v", keys[i])
                            fmt.Print("
")
                            foundAddresses += 1
                            found += 1
                        }
                    }
                }(i, ii, keys)
            }
        }(i)
        foundAddresses = 0
    }
}

I would not use a global sync.WaitGroup; it makes it hard to understand what is happening. Instead, just define it wherever you need it.

You are calling wg.Wait() inside the loop body. That basically blocks the loop on every iteration, waiting for the goroutine just spawned to complete. What you really want is to spawn all the goroutines and only then wait for their completion.

if ok {
    cmd := exec.Command("clear")
    cmd.Stdout = os.Stdout
    cmd.Run()
    var wg sync.WaitGroup //I am about to spawn goroutines, I need to wait for them
    for i.Cmp(bignum) < 0 {
        wg.Add(1)
        go func(i *big.Int) {
            defer wg.Done()
            go printKeys(i.String(), kpp)
            i.Add(i, big.NewInt(1))
            pages_queried += 1
            infofunc(i)
        }(i)
    }
    wg.Wait() //Now that all goroutines are working, let's wait
}

You cannot avoid print overlap when you have multiple goroutines. If that's a problem, you might consider Go's log package from the standard library, which adds timestamps for you. You should then be able to sort the lines in chronological order.
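As an illustration, here is a minimal sketch (not from the original code): the standard log package prefixes every line with a timestamp and serializes each call with an internal mutex, so two goroutines cannot split a single line, and the timestamps let you sort the output afterwards.

package main

import (
    "log"
    "sync"
)

func main() {
    // Include microseconds so lines from different goroutines can be ordered.
    log.SetFlags(log.Ldate | log.Ltime | log.Lmicroseconds)

    var wg sync.WaitGroup
    for i := 0; i < 4; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            log.Printf("worker %d: found a candidate", id) // one atomic, timestamped line per call
        }(i)
    }
    wg.Wait()
}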

Anyway, splitting the code into more goroutines does not guarantee a speed-up. If the problem you are trying to solve is intrinsically sequential, more goroutines will just add contention and pressure on the Go scheduler, leading to the opposite result. More details here. Thus, a goroutine for infofunc will not help. It can, however, be improved by using the log package instead of the plain fmt package.

func infofunc(i *big.Int) {
    elapsed := time.Since(start_time)
    duration := int(elapsed.Seconds())
    if duration != 0 {
        log.Printf("\033[5;0H")
        log.Printf("Started at %s. Found: %d. Elapsed: %s. Queried: %d pages. Current page: %s. Rate: %d/s", start_time.String(), found, elapsed.String(), pages_queried, i.String(), (pages_queried / duration))
    }
}

For printKeys, I would not create so many goroutines; they are not going to help if the work they need to perform is CPU-bound, which seems to be the case here.

func printKeys(pageNumber string, keysPerPage int) {
    keys := generateKeys(pageNumber, keysPerPage)
    length := len(keys)
    var addressesLen = len(addresses)
    var wg sync.WaitGroup //Local WaitGroup
    for i := 0; i < length; i++ {
        wg.Add(1)
        go func(i int) {  //This goroutine could be removed, in my opinion.
            defer wg.Done()
            for ii := 0; ii < addressesLen; ii++ {
                for _, v := range addresses {
                    if set[keys[i].compressed] || set[keys[i].uncompressed] {
                        log.Printf("Found an address: %v
", v)
                        log.Printf("%v", keys[i])
                        log.Printf("
")
                        foundAddresses += 1
                        found += 1
                    }
                }
            }
        }(i)
        foundAddresses = 0
    }
    wg.Wait()
}

I would suggest writing a benchmark for these functions and then enabling tracing. That way you should get an idea of where your code is spending most of its time.
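For example, a hypothetical benchmark sketch in a file such as printkeys_test.go (the page number "1" and the 128 keys per page are arbitrary values I picked for illustration):

package main

import "testing"

func BenchmarkPrintKeys(b *testing.B) {
    for n := 0; n < b.N; n++ {
        printKeys("1", 128)
    }
}

Then run it with the execution tracer enabled and inspect the result:

go test -bench=PrintKeys -trace=trace.out
go tool trace trace.out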