I have created a function that generates a unique referral code for a user when they sign up. To ensure uniqueness I check whether the code already exists; if it does, I call the function again recursively:
public function generateUniqueReferralCode()
{
    $referral_code = str_random(8);

    if (User::where('referral_code', $referral_code)->exists()) {
        $referral_code = $this->generateUniqueReferralCode();
    }

    return $referral_code;
}
My question is: is this computationally expensive? Can it be done more efficiently, given that it has to scan the user table? Let's say we have 1 million users; it will check the key against 1 million user records to see whether it already exists.
PHP function calls are fairly costly, so I think the following iterative version is a little faster (I didn't benchmark it):
public function generateUniqueReferralCode()
{
    $referral_code = str_random(8);

    while (User::where('referral_code', $referral_code)->exists()) {
        $referral_code = str_random(8);
    }

    return $referral_code;
}
My approach would be a little simpler. Instead of checking all those records for uniqueness, I would generate a random key and append the primary key of the record being created.

For instance, here's my flow of thought:
1234abc
3
1234abc3
(the result will always be unique)

No, a database uses efficient indexing (search trees or hash indexes) for lookups, so the number of records is virtually immaterial.
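The append-the-primary-key idea above can be sketched in plain PHP. Here `$lastId` is assumed to be the auto-increment id of the newly created user; since ids never repeat, the combined code can never collide:

```php
<?php
// Sketch of "random prefix + primary key": the random part only obscures
// the sequence; uniqueness comes entirely from the appended id.
function referralCodeFromId(int $lastId, int $randomLength = 7): string
{
    $alphabet = 'abcdefghijklmnopqrstuvwxyz0123456789';
    $prefix = '';
    for ($i = 0; $i < $randomLength; $i++) {
        $prefix .= $alphabet[random_int(0, strlen($alphabet) - 1)];
    }
    return $prefix . $lastId;   // e.g. "1234abc" . "3" gives "1234abc3"
}

echo referralCodeFromId(3), PHP_EOL;
```

One trade-off to note: the code's length grows with the user count, and the trailing digits reveal the user's database id, which may or may not matter for a referral code.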
But why don't you just increment a counter to guarantee uniqueness implicitly? (And add random salt if you want.)
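A minimal sketch of the counter-plus-salt idea, assuming the counter comes from somewhere already monotonic (in practice the users table's auto-increment id or a database sequence, passed in here as a parameter):

```php
<?php
// The counter alone guarantees uniqueness; the random salt only hides
// the sequence from users. Encoding the counter in base 36 keeps the
// code short.
function nextReferralCode(int $counter): string
{
    $salt = bin2hex(random_bytes(3));                 // 6 random hex chars
    return $salt . base_convert((string) $counter, 10, 36);
}

echo nextReferralCode(1000), PHP_EOL;
```

Because two distinct counters always produce distinct base-36 suffixes, no existence check against the table is needed at all.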