Golang client-side load balancer when not serving HTTP

As a golang n00b, I have a Go program that reads messages from Kafka, modifies them, then posts them to one of the HTTP endpoints in a list.

As of now we do some really basic random selection:

cur := rand.Int() % len(httpEndpointList)

I'd like to improve that and add weight to the endpoints based on their response time or something similar.

I've looked into libraries, but everything I seem to find is written to be used as middleware via http.Handler. For example, see the oxy lib's roundrobin.

In my case I do not serve HTTP requests per se.

Any ideas on how I could accomplish that sort of more advanced client-side load balancing in my Go program?

I'd like to avoid adding yet another HAProxy or similar to my environment.

There is a very simple algorithm for weighted random selection:

package main

import (
    "fmt"
    "math/rand"
)

type Endpoint struct {
    URL    string
    Weight int
}

// RandomWeightedSelector picks an endpoint at random, with probability
// proportional to its weight.
func RandomWeightedSelector(endpoints []Endpoint) Endpoint {
    // This first loop should be optimised so it only gets computed once.
    max := 0
    for _, endpoint := range endpoints {
        max += endpoint.Weight
    }

    // Pick a point in [0, max) and walk the list until the cumulative
    // weight passes it.
    r := rand.Intn(max)
    for _, endpoint := range endpoints {
        if r < endpoint.Weight {
            return endpoint
        }
        r -= endpoint.Weight
    }
    // Should never get here, because r is smaller than max.
    return Endpoint{}
}

func main() {
    endpoints := []Endpoint{
        {Weight: 1, URL: "https://web1.example.com"},
        {Weight: 2, URL: "https://web2.example.com"},
    }

    count1 := 0
    count2 := 0

    for i := 0; i < 100; i++ {
        switch RandomWeightedSelector(endpoints).URL {
        case "https://web1.example.com":
            count1++
        case "https://web2.example.com":
            count2++
        }
    }
    fmt.Println("Times web1: ", count1)
    fmt.Println("Times web2: ", count2)
}

It can be optimized; this is the most naive version. For production you should definitely not calculate max every time, but apart from that, this basically is the solution.

Here is a more polished, object-oriented version that does not recompute max every time:

package main

import (
    "fmt"
    "math/rand"
)

type Endpoint struct {
    URL    string
    Weight int
}

type RandomWeightedSelector struct {
    max       int
    endpoints []Endpoint
}

// AddEndpoint registers an endpoint and keeps the running total of
// weights up to date, so Select never has to recompute it.
func (rws *RandomWeightedSelector) AddEndpoint(endpoint Endpoint) {
    rws.endpoints = append(rws.endpoints, endpoint)
    rws.max += endpoint.Weight
}

// Select returns a random endpoint, with probability proportional to
// its weight.
func (rws *RandomWeightedSelector) Select() Endpoint {
    if rws.max <= 0 {
        // No endpoints (or only zero weights); rand.Intn would panic.
        return Endpoint{}
    }
    r := rand.Intn(rws.max)
    for _, endpoint := range rws.endpoints {
        if r < endpoint.Weight {
            return endpoint
        }
        r -= endpoint.Weight
    }
    // Should never get here, because r is smaller than max.
    return Endpoint{}
}

func main() {
    var rws RandomWeightedSelector
    rws.AddEndpoint(Endpoint{Weight: 1, URL: "https://web1.example.com"})
    rws.AddEndpoint(Endpoint{Weight: 2, URL: "https://web2.example.com"})

    count1 := 0
    count2 := 0

    for i := 0; i < 100; i++ {
        switch rws.Select().URL {
        case "https://web1.example.com":
            count1++
        case "https://web2.example.com":
            count2++
        }
    }
    fmt.Println("Times web1: ", count1)
    fmt.Println("Times web2: ", count2)
}

For the part about updating the weights based on a metric like endpoint latency, I would create a separate object that uses those metrics to update the weights in the RandomWeightedSelector object. Implementing it all in one place would go against single responsibility.
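Here is a minimal sketch of what that split could look like. Everything in it is hypothetical and just one way to do the wiring: the UpdateWeight method, the LatencyUpdater type, and the EWMA-based mapping from latency to weight are all my own invention, not part of any library, and the block assumes the sync and time imports on top of the program above.

// UpdateWeight is a hypothetical extension to RandomWeightedSelector:
// it changes one endpoint's weight and fixes up the cached max.
func (rws *RandomWeightedSelector) UpdateWeight(url string, weight int) {
    for i := range rws.endpoints {
        if rws.endpoints[i].URL == url {
            rws.max += weight - rws.endpoints[i].Weight
            rws.endpoints[i].Weight = weight
            return
        }
    }
}

// LatencyUpdater keeps an exponentially weighted moving average (EWMA)
// of each endpoint's response time and maps "faster" to "heavier".
type LatencyUpdater struct {
    mu       sync.Mutex
    ewma     map[string]float64 // URL -> smoothed latency in ms
    selector *RandomWeightedSelector
}

func NewLatencyUpdater(selector *RandomWeightedSelector) *LatencyUpdater {
    return &LatencyUpdater{
        ewma:     make(map[string]float64),
        selector: selector,
    }
}

// Observe records one response time and pushes a new weight into the
// selector. The constants are arbitrary; tune them for your traffic.
func (lu *LatencyUpdater) Observe(url string, rtt time.Duration) {
    const alpha = 0.2 // smoothing factor: higher reacts faster but is noisier

    lu.mu.Lock()
    defer lu.mu.Unlock()

    ms := float64(rtt.Milliseconds())
    prev, ok := lu.ewma[url]
    if !ok {
        prev = ms // first sample: start the average at the observed value
    }
    smoothed := alpha*ms + (1-alpha)*prev
    lu.ewma[url] = smoothed

    // Roughly inverse latency as weight: faster endpoints get bigger
    // weights. The +1 avoids division by zero; the clamp keeps every
    // endpoint reachable so a slow one can still recover.
    weight := int(1000 / (smoothed + 1))
    if weight < 1 {
        weight = 1
    }
    lu.selector.UpdateWeight(url, weight)
}

After each request you would call something like updater.Observe(endpoint.URL, elapsed). One caveat: if Observe and Select run on different goroutines, the RandomWeightedSelector itself also needs a mutex, since UpdateWeight mutates the same slice that Select reads.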