I have to deal with huge integers in Golang that come in from a Swagger-defined REST API. Since Swagger needs a Validate(strfmt.Registry) method, I define my custom type like this:
// BigInt is a big.Int, but includes a Validate() method for swagger.
// Once created, it can be used just like a big.Int.
type BigInt struct {
	*big.Int
}
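For reference, the Validate method itself can be a no-op here; a minimal sketch, assuming the github.com/go-openapi/strfmt package that go-swagger uses:

// Validate satisfies the Validatable interface that go-swagger expects.
// A bare big integer carries no format constraints, so there is nothing to check.
func (b *BigInt) Validate(formats strfmt.Registry) error {
	return nil
}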
Since it needs to be transformed to and from JSON, I implement the JSON marshaling interfaces:
// UnmarshalJSON implements the json.Unmarshaler interface for BigInt.
func (b *BigInt) UnmarshalJSON(data []byte) error {
	return json.Unmarshal(data, &b.Int)
}
// MarshalJSON calls json.Marshal() on the BigInt.Int field.
func (b *BigInt) MarshalJSON() ([]byte, error) {
	if b == nil {
		return []byte("null"), nil
	}
	return json.Marshal(b.Int)
}
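A quick round trip shows the marshalers in action; a sketch, assuming the type above plus the encoding/json, fmt, and log imports (the field name and payload are made up for illustration):

payload := []byte(`{"amount": 123456789012345678901234567890}`)

var doc struct {
	Amount *BigInt `json:"amount"`
}
if err := json.Unmarshal(payload, &doc); err != nil {
	log.Fatal(err)
}

out, err := json.Marshal(doc)
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(out)) // {"amount":123456789012345678901234567890}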
Now I realized that my custom type doesn't actually behave exactly like big.Int. In order to compare two BigInts:
example := BigInt{Int: &big.Int{}}
other := BigInt{Int: &big.Int{}}
example.Cmp(other.Int)
I cannot do
example.Cmp(other)
which would be much cleaner. And creating a BigInt is also a poor experience, so I have to wrap it in a function like this:
// NewBigInt creates a BigInt with its Int struct field initialized.
func NewBigInt() (i *BigInt) {
	return &BigInt{Int: &big.Int{}}
}
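To get the cleaner example.Cmp(other) call from above, one option is a thin wrapper that shadows the promoted method; a sketch (the value receiver is just one choice):

// Cmp compares b and other by forwarding to the embedded big.Int values.
// Defining it on BigInt shadows the promoted (*big.Int).Cmp, so callers
// can pass a BigInt instead of reaching into other.Int.
func (b BigInt) Cmp(other BigInt) int {
	return b.Int.Cmp(other.Int)
}

With that in place, the example.Cmp(other) call from the snippet above compiles as written.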
Why can't golang treat big.Int just like its other built-in types like int64/uint64/float64? Is this really how I'm supposed to do things?
It's a way to do it, but it's not what I'd call a "renamed" type; it's a struct containing a single field. You could also define a named type (like, for example, time.Duration, which is defined over int64):
type BigInt big.Int
and attach methods to it. This lets you convert between big.Int (or *big.Int, with a pointer conversion) and your type directly, without going through a struct field. Note that the named type must be defined over the value type: Go does not allow methods on a defined pointer type such as type BigInt *big.Int.
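A minimal sketch of that approach, assuming only the math/big import (the method bodies just forward through the pointer conversion; Validate would attach the same way):

// BigInt is a defined type over big.Int; *BigInt converts to *big.Int
// with an explicit pointer conversion, so no wrapper struct is needed.
type BigInt big.Int

// MarshalJSON forwards to (*big.Int).MarshalJSON.
func (b *BigInt) MarshalJSON() ([]byte, error) {
	return (*big.Int)(b).MarshalJSON()
}

// UnmarshalJSON forwards to (*big.Int).UnmarshalJSON.
func (b *BigInt) UnmarshalJSON(data []byte) error {
	return (*big.Int)(b).UnmarshalJSON(data)
}

Call sites that need big.Int behaviour convert explicitly, e.g. (*big.Int)(a).Cmp((*big.Int)(b)).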
Why can't golang treat big.Int just like its other built-in types like int64/uint64/float64?
Because unlike those types, big.Int isn't a built-in type; you can tell because it's big.Int, i.e., it's defined in a package, not in the language.
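For example, arithmetic on built-in types comes from the language's operators, while big.Int is ordinary library code driven through method calls:

var a, b int64 = 2, 3
sum := a + b // operators are defined by the language spec

x := big.NewInt(2)
y := big.NewInt(3)
z := new(big.Int).Add(x, y) // big.Int arithmetic goes through methods

fmt.Println(sum, z) // 5 5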