Channel concurrency guarantees

I'm writing a concurrency-safe memo:

package mu

import (
    "sync"
)

// Func represents a memoizable function, operating on a string key, to use with a Mu
type Func func(key string) interface{}

// Mu is a cache that memoizes results of an expensive computation
//
// It has a traditional implementation using mutexes.
type Mu struct {
    // guards done
    mu   sync.RWMutex
    done map[string]chan bool
    memo map[string]interface{}
    f    Func
}

// Get returns the value for a string key if it exists; otherwise it
// computes the value and caches it.
//
// The second return value reports whether the key already existed.
func (c *Mu) Get(key string) (interface{}, bool) {
    c.mu.RLock()
    _, ok := c.done[key]
    c.mu.RUnlock()
    if ok {
        return c.get(key), true
    }

    c.mu.Lock()
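    // re-check under the write lock: another goroutine may have started
    // the computation between the RUnlock and the Lock above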
    _, ok = c.done[key]
    if ok {
        c.mu.Unlock()
    } else {
        c.done[key] = make(chan bool)
        c.mu.Unlock()

        v := c.f(key)
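        // publish the result before closing the channel; the close is what
        // releases the readers waiting in get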
        c.memo[key] = v

        close(c.done[key])
    }
    return c.get(key), ok
}

// get returns the value of key, blocking on an existing computation
func (c *Mu) get(key string) interface{} {
    <-c.done[key]
    v, _ := c.memo[key]
    return v
}

As you can see, there's a mutex guarding the done field, which is used to signal to other goroutines that a computation for a key is pending or done. This avoids duplicate computations (calls to c.f(key)) for the same key.
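
For completeness, the done and memo maps have to be initialized before Get is ever called; a constructor along these lines would do it (a minimal sketch, not part of the question itself):

// NewMu returns a Mu that memoizes f. Shown here only as a sketch of how
// the maps are expected to be initialized.
func NewMu(f Func) *Mu {
    return &Mu{
        done: make(map[string]chan bool),
        memo: make(map[string]interface{}),
        f:    f,
    }
}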

My question is about the guarantees of this code: since the computing goroutine closes the channel only after it writes to c.memo, are other goroutines that access c.memo[key] after a (possibly blocking) receive on <-c.done[key] guaranteed to see the result of the computation?

The short answer is yes.

We can simplify the code to get at the essence of why. Consider a stripped-down version of your Mu struct:

type Mu struct {
    memo int
    done chan bool
}

We can now define two functions, compute and read:

func compute(r *Mu) {
    time.Sleep(2 * time.Second)
    r.memo = 42
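    // the write to memo above is ordered before the close below, so any
    // reader released by the close is guaranteed to observe memo == 42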
    close(r.done)
}

func read(r *Mu) {
    <-r.done
    fmt.Println("Read value: ", r.memo)
}

Here, compute represents a computationally heavy task (which we simulate by sleeping for a couple of seconds), while read blocks on the channel before printing the memoized value.

Now, in the main function, we start a compute goroutine, and then start read goroutines at regular intervals:

func main() {
    r := &Mu{}
    r.done = make(chan bool)
    go compute(r)

    // this one starts immediately
    go read(r)
    time.Sleep(time.Second)

    // this one starts in the middle of computation
    go read(r)
    time.Sleep(2 * time.Second)

    // this one starts after the computation is complete
    go read(r)

    // This is to prevent the program from terminating immediately
    time.Sleep(3 * time.Second)
}

In all three cases, we print out the result of the compute task.
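
If the timing plays out as written, each reader prints the memoized value once the channel is closed (the first two after waiting, the third immediately), so the output should look something like:

Read value:  42
Read value:  42
Read value:  42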

Working code here

When you close a channel in Go, receives on that channel stop blocking: goroutines already waiting on it are released, and any receive executed after the close returns immediately with the element type's zero value. More importantly, the Go memory model guarantees that the close of a channel happens before a receive that returns because the channel is closed. So as long as the only place the channel is closed is right after the memo value has been written, any goroutine that returns from <-c.done[key] is guaranteed to see that write.
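
Here is a tiny standalone sketch of that behaviour (separate from the code above): once a channel is closed, a receive never blocks and yields the element type's zero value.

package main

import "fmt"

func main() {
    done := make(chan bool)
    close(done)

    // Neither receive blocks: the first yields the zero value (false),
    // and the comma-ok form additionally reports that the channel is closed.
    v := <-done
    w, ok := <-done
    fmt.Println(v, w, ok) // false false false
}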

The only thing to be careful about is making sure that this channel isn't closed anywhere else in your code: a close that happens before the memo is written would break the guarantee, and closing an already-closed channel panics.
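
For example, this minimal program panics at runtime on the second close:

package main

func main() {
    done := make(chan bool)
    close(done)
    close(done) // panic: close of closed channel
}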