I hope my question is suitable for this site and is not too broad, but I am having trouble designing the following architecture:
I have two sites. Site 1 is responsible for transferring credits between users. Site 2 is responsible for providing these users with services/products which can be paid for with the credits they own on Site 1.
Let's say I have 1000 credits on Site 1. There is a service/product on Site 2 which costs 50 credits, and a user wants to purchase it with the credits he owns on Site 1.
Both sites communicate over REST. For example, when a user wants to purchase a service/product, Site 2 prepares its request and sends it to Site 1, which performs the transaction and confirms to Site 2 that it was successful (i.e. the user had enough credits for the service/product and those credits were successfully transferred to the destination).
Now here's the tricky part. In Site 1 I have the following logic:
Begin transaction
update user set credits = credits - 50 where id = 1
update user set credits = credits + 50 where id = 2
REST call to Site 2
Site 2 responds OK
Commit
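To make that concrete, here is roughly what I have in mind (a Python sketch assuming PostgreSQL via psycopg2 and the requests library; the table, endpoint and function names are just placeholders, not my real code):

    import psycopg2
    import requests

    def transfer_credits(buyer_id, seller_id, amount, purchase_id):
        conn = psycopg2.connect("dbname=site1")
        try:
            with conn:  # commits on success, rolls back on any exception
                with conn.cursor() as cur:
                    cur.execute("UPDATE users SET credits = credits - %s WHERE id = %s",
                                (amount, buyer_id))
                    cur.execute("UPDATE users SET credits = credits + %s WHERE id = %s",
                                (amount, seller_id))
                    # the REST call happens while the DB transaction is still open,
                    # so these two rows stay locked until Site 2 answers
                    resp = requests.post(
                        "https://site2.example/api/purchases/%s/confirm" % purchase_id,
                        timeout=10)
                    resp.raise_for_status()  # any HTTP error aborts the transfer
        finally:
            conn.close()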
Since REST is a call to a different site, the transaction might take some time to complete. In the meantime, is the whole table locked against other transactions, or only the rows for user 1 and user 2? Is this the proper way to implement my logic? Am I missing something?
Thank you in advance.
This is in response to your question on Casey's answer:
Yes, as long as you do it like this:
Site 2:
Site 1:
Site 2:
EITHER:
1. Receive the REST success call, make the purchase available for download/dispatch, and display a message to the user saying the purchase processing is complete. Update the local transaction record, state = succeeded. (See the sketch after this list.)
OR
2. Site 2 is down. Transaction success will be noted the next time the background polling process runs (which checks the status of purchase requests awaiting responses) or the next time the customer logs in (in which case a poll is initiated too; step 3 in the first list).
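A minimal sketch of what option 1 could look like on Site 2's side (assuming Flask and SQLite purely for illustration; the endpoint, table and column names are not from your system):

    import sqlite3
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/api/purchases/<int:purchase_id>/confirm", methods=["POST"])
    def confirm_purchase(purchase_id):
        conn = sqlite3.connect("site2.db")
        with conn:  # commits on success
            conn.execute(
                "UPDATE purchases SET state = 'succeeded', available = 1 WHERE id = ?",
                (purchase_id,))
        conn.close()
        # at this point the UI can tell the user that processing is complete
        return jsonify({"status": "ok"})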
If you have not received a response for a transaction, perform a GET using the transaction ID. If the response is an error, Site 1 did not receive the original request, and Site 2 is free to repeat the transaction (POST) request. If the response is 'transaction failed', then the user didn't have enough credits; update the transaction record on Site 2 accordingly. If the result is 'transaction succeeded', record that too.
If a transaction fails N times, or a certain period has elapsed since the user clicked the button (say 5 minutes), then Site 2 stops retrying the purchase.
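That reconciliation step could look roughly like this (a Python sketch using the requests library and a SQLite transaction log; the URLs, table names and states are assumptions, not your actual schema):

    import sqlite3
    import requests

    MAX_ATTEMPTS = 5

    def reconcile_pending_purchases():
        conn = sqlite3.connect("site2.db")
        rows = conn.execute(
            "SELECT id, attempts FROM purchases WHERE state = 'pending'").fetchall()
        for purchase_id, attempts in rows:
            resp = requests.get("https://site1.example/api/transactions/%s" % purchase_id,
                                timeout=10)
            if resp.status_code == 404:
                # Site 1 never saw the original request: safe to repeat the POST,
                # unless we've already retried too often
                if attempts < MAX_ATTEMPTS:
                    requests.post("https://site1.example/api/transactions",
                                  json={"id": purchase_id}, timeout=10)
                    conn.execute("UPDATE purchases SET attempts = attempts + 1 WHERE id = ?",
                                 (purchase_id,))
                else:
                    conn.execute("UPDATE purchases SET state = 'abandoned' WHERE id = ?",
                                 (purchase_id,))
            elif resp.json().get("state") == "failed":
                # user didn't have enough credits
                conn.execute("UPDATE purchases SET state = 'failed' WHERE id = ?",
                             (purchase_id,))
            elif resp.json().get("state") == "succeeded":
                conn.execute("UPDATE purchases SET state = 'succeeded' WHERE id = ?",
                             (purchase_id,))
        conn.commit()
        conn.close()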
Since you're looking rows up by primary key and not by range, it should be row-level locking. There's also the concept of a shared vs. an exclusive lock: shared locks allow other processes to still read the data, while an exclusive lock is used in an update/delete scenario and blocks all others from reading it.
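For example, in PostgreSQL you can make those row locks explicit with SELECT ... FOR UPDATE; this is only a sketch (assuming psycopg2 and made-up table/column names), but it shows that only the two rows involved are locked:

    import psycopg2

    conn = psycopg2.connect("dbname=site1")
    with conn:
        with conn.cursor() as cur:
            # exclusive row locks on just these two rows; the rest of the
            # table stays available to other transactions
            cur.execute("SELECT credits FROM users WHERE id IN (1, 2) FOR UPDATE")
            cur.execute("UPDATE users SET credits = credits - 50 WHERE id = 1")
            cur.execute("UPDATE users SET credits = credits + 50 WHERE id = 2")
    conn.close()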
On the logic in general: if there's really only one place storing the credits and one place reading them, how important is it to sync in real time? Would 3, 5, or 10 seconds later be sufficient? If Site 1 is completely down, do you want Site 2 to still work?
Personally, I would restructure things a bit:
Since S1 is always returning the definitive current credit count, you don't have to worry about maintaining the credit count on S2 in a 100% accurate way; you can just wait for it from S1.
If you're really nervous about it, you could have a job that runs every N hours and polls S1, requesting updates for every account updated during those N hours.
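That periodic sync can stay very small; something along these lines (a sketch assuming a hypothetical /api/accounts endpoint on S1, the requests library, and a local SQLite cache on S2):

    import sqlite3
    import requests

    def sync_credit_cache(hours=6):
        # ask S1 for every account updated in the last `hours` hours
        resp = requests.get("https://site1.example/api/accounts",
                            params={"updated_within_hours": hours}, timeout=30)
        accounts = resp.json()
        conn = sqlite3.connect("site2.db")
        with conn:
            for account in accounts:
                # the cached value is only a hint for display; S1 stays authoritative
                conn.execute(
                    "UPDATE credit_cache SET credits = ? WHERE user_id = ?",
                    (account["credits"], account["user_id"]))
        conn.close()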