Background fetching in an API library

I'm writing an API client (library) that hits a JSON end-point and populates an in-memory cache.

Thus far:

  • I kick off a time.Ticker loop in the library's init() function that hits the API every minute, which refreshes the cache (a struct that embeds the JSON struct and a timestamp).
  • The public-facing functions in the library just fetch from the cache and therefore don't need to do any rate-limiting themselves, but they can check the timestamp if they want to confirm the freshness of the data

However, starting a time.Ticker in init() does not feel quite right: I haven't seen any other libs do this. I do, however, want to avoid the package user having to do a ton of work just to get data back from a few JSON endpoints.

My public API looks like this:

// Example usage:
// rt := api.NewRT()
// err := rt.GetLatest()
// tmpl.ExecuteTemplate(w, "my_page.tmpl", M{"results": rt.Data})

func (rt *RealTime) GetLatest() error {
    realtimeCache.Lock()
    defer realtimeCache.Unlock()

    if realtimeCache.Cached == nil {
        return errors.New("no cached response is available")
    }
    // Copy the cached value into the receiver. Assigning to rt itself
    // (rt = realtimeCache.Cached) would only change the local pointer,
    // not the value the caller holds.
    *rt = *realtimeCache.Cached

    return nil
}

And the internal fetcher is as below:

func fetchLatest() error {
    log.Println("Fetching latest RT results.")
    resp, err := http.Get(realtimeEndpoint)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        return err
    }

    // Lock our cache from writes
    realtimeCache.Lock()
    defer realtimeCache.Unlock()

    var rt *RealTime
    err = json.Unmarshal(body, &rt)
    if err != nil {
        return err
    }

    // Update the cache
    realtimeCache.Cached = rt

    return nil
}

func init() {
    // Populate the cache on start-up
    fetchLatest()
    fetchHistorical()

    // Refresh the cache every minute (default)
    ticker := time.NewTicker(time.Second * interval)
    go func() {
        for range ticker.C {
            fetchLatest()
            fetchHistorical()
        }
    }()
}

There are similar functions for other parts of the API (which I'm modularising, but I've kept it simple to start with), but this is the gist of it.

Is there a better way to have a background worker fetch results that's still user-friendly?

As Elwinar said, starting the timer in init() is a bad idea. However, you have a constructor, so any "object construction" should happen there. Here's a short example:

(check the playground for the full code)

func NewRT(interval int) (rt *realTime) {
    rt = &realTime{
        tk: time.NewTicker(time.Second * time.Duration(interval)),
    }
    go func() {
        rt.fetch()
    for range rt.tk.C {
            rt.fetch()
        }
    }()

    return
}

func (rt *realTime) fetch() {
    rt.Lock()
    defer rt.Unlock()
    rt.fetchLatest()
    rt.fetchHistory()
}

......

func (rt *realTime) GetLatest() error {
    rt.RLock()
    defer rt.RUnlock()
    if len(rt.cached) == 0 { // len is 0 for a nil slice, so no separate nil check is needed
        return ErrNoCachedResponse
    }

    return nil
}

func (rt *realTime) Stop() {
    rt.Lock()
    defer rt.Unlock()
    rt.tk.Stop()
}

IMHO, starting the timer in the init() function is a bad idea, for the single reason that the user of your API should be the one to decide if and when to do the fetching/caching/updating.

I would advise making the caching and auto-updating of the data optional, using either options in the NewRT() function or package-wide booleans (api.AutoUpdate, api.Caching).

On the call of your accessors, you can then make the proper action:

  • Retrieve the data if caching isn't enabled
  • Check for data freshness if caching is enabled but auto-update isn't, and refresh if needed
  • Nothing if caching and auto-updating are both enabled, as your timer (started in the NewRT() function) will take care of the data for you

This way you don't start retrieving anything before your user needs it, but you have the flexibility to let your users decide if they need the additional functionality.

Note that you should ensure that unnecessary timers aren't kept running after the corresponding struct has been discarded.