Bitmask multiple values in an int64

I'm using https://github.com/coocood/freecache to cache database results, but currently I need to dump bigger chunks on every delete, which costs several extra microseconds compared to targeted deletion. fmt.Sprintf("%d_%d_%d") for a pattern like #SUBJECT_#ID1_#ID2 also costs multiple microseconds. Even though that doesn't sound like much, relative to the cache's current response time it makes the call several times slower than it currently is.

I was thinking of using the library's SetInt/GetInt which works with int64 keys instead of strings.

So let's say I'm storing in a #SUBJECT_#ID1_#ID2 pattern. The subject is a table or query-segment range in my code (e.g. everything concerning ACL or product filtering).

Let's take an example where Userright.id is #ID1, User.id is #ID2 and the subject is ACL. I would build it as something like this:

// const CACHE_SUBJECT_ACL = 0x1
// var userrightID int64 = 0x1
// var userID int64 = 0x1
var storeKey int64 = 0x1000000101

fmt.Println("Range: ", storeKey&0xff)
fmt.Println("ID1  : ", storeKey&0xfffffff00-0xff)
fmt.Println("ID2  : ", storeKey&0x1fffffff00000000-0xfffffffff)

How can I combine CACHE_SUBJECT_ACL/userrightID/userID into the storeKey?

I know I could write userrightID as 0x100000001, but it's a dynamic value, so I'm not sure what the best way is to combine these without causing more overhead than formatting the string key.

The idea is that at a later stage, when I need to flush the cache, I can issue a small range of targeted int64 deletes instead of dumping a whole partition (of maybe thousands of entries).

I was thinking of adding them to each other with bit shifting, like userID<<8, but I'm not sure if that's the safe route.

If I failed to supply enough information, please ask.

Packing numbers to an int64

If we can make sure the numbers we want to pack are not negative and they fit into the bit range we're reserving for them, then yes, this is a safe and efficient way to pack them.

An int64 has 64 bits; that's how many we can distribute among the parts we want to pack into it. Often the sign bit is left unused to avoid confusion, or the unsigned uint64 is used instead.

For example, if we reserve 8 bits for subject, that leaves 64-8=56 bits for the rest, 28 bits for each ID.

                   | ID2         | ID1         |SUB|
Encoded key bits:  |f f f f f f f|f f f f f f f|f f|

Note that when encoding, it's recommended to also use a bitmask with bitwise AND to make sure the numbers we pack do not overlap (arguable, because if the components are bigger, we're screwed anyway...).

Also note that if we're also using the sign bit (bit 63), we have to apply the mask after the shift when decoding, as shifting right "brings in" the sign bit and not 0 (the sign bit is 1 for negative numbers).
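
To see this in action, here is a small standalone illustration (it reuses the 28-bit ID mask from the code below) where the packed ID reaches into bit 63:

id2 := int64(0x8000000) // 2^27: the top bit of a 28-bit ID
key := id2 << 36        // ends up in bit 63, the sign bit

fmt.Println(key >> 36)               // arithmetic shift copies the sign bit in: -134217728
fmt.Println((key >> 36) & 0xfffffff) // masking after the shift restores 134217728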

Since we used 28 bits for both ID1 and ID2, we can use the same mask for both IDs:

Use these short utility functions which get the job done:

const (
    maskSubj = 0xff      // lowest 8 bits: the subject
    maskId   = 0xfffffff // 28 bits: one ID
)

func encode(subj, id1, id2 int64) int64 {
    return subj&maskSubj | (id1&maskId)<<8 | (id2&maskId)<<36
}

func decode(key int64) (sub, id1, id2 int64) {
    return key & maskSubj, (key >> 8) & maskId, (key >> 36) & maskId
}

Testing it:

key := encode(0x01, 0x02, 0x04)
fmt.Printf("%016x
", key)
fmt.Println(decode(key))

Output (try it on the Go Playground):

0000004000000201
1 2 4

Sticking to string

Originally you explored packing into an int64 because fmt.Sprintf() was slow. Note that Sprintf() has to parse the format string and format the arguments according to the "rules" laid out in it, which takes time.

But in your case we don't need this. We can simply get what you originally wanted like this:

id2, id1, subj := 0x04, 0x02, 0x01
key := fmt.Sprint(id2, "_", id1, "_", subj)
fmt.Println(key)

Output:

4_2_1

This one will be significantly faster as it doesn't have to process a format string, it will just concatenate the arguments.

We can even do better: when neither of two adjacent arguments is a string value, a space is automatically inserted between them, so it's really enough to just list the numbers:

key = fmt.Sprint(id2, id1, subj)
fmt.Println(key)

Output:

4 2 1

Try these on the Go Playground.

Utilizing strconv.AppendInt()

We can improve this further by using strconv.AppendInt(). This function appends the textual representation of an integer to a byte slice. We can use base 16 so we get a more compact representation, and also because converting a number to base 16 is faster than to base 10:

func encode(subj, id1, id2 int64) string {
    b := make([]byte, 0, 20) // pre-allocate enough room for the hex digits and the two separators

    b = strconv.AppendInt(b, id2, 16)
    b = append(b, '_')
    b = strconv.AppendInt(b, id1, 16)
    b = append(b, '_')
    b = strconv.AppendInt(b, subj, 16)

    return string(b)
}

Testing it:

id2, id1, subj := int64(0x04), int64(0x02), int64(0x01)
key := encode(subj, id1, id2)
fmt.Println(key)

Output (try it on the Go Playground):

4_2_1
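
If you want to verify the speed difference on your own data, here is a minimal benchmark sketch (drop it into a _test.go file and run go test -bench=.; the fixed example IDs are just placeholders):

package main

import (
    "fmt"
    "strconv"
    "testing"
)

// Placeholder inputs; substitute realistic IDs from your tables.
var subj, id1, id2 int64 = 0x01, 0x02, 0x04

func BenchmarkSprintf(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = fmt.Sprintf("%d_%d_%d", subj, id1, id2)
    }
}

func BenchmarkSprint(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = fmt.Sprint(id2, "_", id1, "_", subj)
    }
}

func BenchmarkAppendInt(b *testing.B) {
    for i := 0; i < b.N; i++ {
        buf := make([]byte, 0, 20)
        buf = strconv.AppendInt(buf, id2, 16)
        buf = append(buf, '_')
        buf = strconv.AppendInt(buf, id1, 16)
        buf = append(buf, '_')
        buf = strconv.AppendInt(buf, subj, 16)
        _ = string(buf)
    }
}

func BenchmarkPackInt64(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = subj&0xff | (id1&0xfffffff)<<8 | (id2&0xfffffff)<<36
    }
}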

I seem to have figured it out:

const CacheSubjectACL = 1
var userrightID int64 = 8
var userID int64 = 2
storeKey := CacheSubjectACL + (userrightID << 8) + (userID << 36)

fmt.Println("storeKey: ", storeKey)
fmt.Println("Range   : ", storeKey&0xff)
fmt.Println("ID1     : ", storeKey&0xfffffff00>>8)
fmt.Println("ID2     : ", storeKey&0x1ffffff000000000>>36)

Gives:

storeKey:  137438955521
Range   :  1
ID1     :  8
ID2     :  2

storeKey packs the values into one int64, and masking combined with a shift in the opposite direction fishes the old values back out of the int64 again.

Because ID2 is the topmost field and there are no bits set further to the left, the masking in storeKey&0x1ffffff000000000>>36 is not strictly needed; a plain storeKey>>36 will suffice too.
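
For completeness, here is a rough sketch of how the packed key could be plugged into freecache's int64-keyed API. SetInt/GetInt are the methods mentioned in the question; I'm assuming a matching DelInt exists for targeted deletes, so double-check against the package docs. The packKey helper, subject constant, IDs and cache size are just example values for this sketch:

package main

import (
    "fmt"

    "github.com/coocood/freecache"
)

const CacheSubjectACL = 1

// packKey combines the subject and the two IDs into one int64 key,
// using the 8/28/28-bit layout from the answer above.
func packKey(subj, id1, id2 int64) int64 {
    return subj&0xff | (id1&0xfffffff)<<8 | (id2&0xfffffff)<<36
}

func main() {
    cache := freecache.NewCache(10 * 1024 * 1024) // 10 MB cache

    var userrightID, userID int64 = 8, 2
    key := packKey(CacheSubjectACL, userrightID, userID)

    // Store and read back a value under the packed key.
    if err := cache.SetInt(key, []byte("cached ACL result"), 60); err != nil {
        fmt.Println("set failed:", err)
    }
    if val, err := cache.GetInt(key); err == nil {
        fmt.Println(string(val))
    }

    // Targeted flush: delete a small range of userright IDs for this
    // user/subject instead of dumping a whole partition.
    // DelInt is assumed here; check the freecache docs for the exact method.
    for id := int64(0); id < 16; id++ {
        cache.DelInt(packKey(CacheSubjectACL, id, userID))
    }
}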