Handling OAuth responses and sessions

At the end of an OAuth2 token exchange, I'm typically left with a JSON response of user data that I've unmarshalled into a struct (say, GoogleUser) with the fields I care about.

What is the sensible way of recording that data to my DB? Just call a CreateUser function from the callback handler, pass the struct and save it (the obvious way to me), after checking that the user doesn't already exist in the DB?

I assume I should then create a session (i.e. set session.Values["authenticated"] = true) in the callback handler, store that in a cookie (with a reasonable expiry date), and simply check whether authenticated == true in any handler that expects a logged-in user, or, for admin handlers, whether admin_user == true. What are the risks here (if any), presuming I'm talking over HTTPS and using secure cookies?
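
For concreteness, this is roughly what I have in mind, using gorilla/sessions (CreateUser, store, and the handler names are just placeholders):

package main

import (
    "net/http"

    "github.com/gorilla/sessions"
)

var store = sessions.NewCookieStore([]byte("session-secret")) // placeholder key

// callbackHandler runs after the token exchange; `user` would be the
// GoogleUser struct unmarshalled from the userinfo response.
func callbackHandler(w http.ResponseWriter, r *http.Request) {
    // err := CreateUser(user) // save the user if they don't already exist
    session, _ := store.Get(r, "session")
    session.Values["authenticated"] = true
    session.Save(r, w)
    http.Redirect(w, r, "/", http.StatusFound)
}

// profileHandler is an example of a handler that expects a logged-in user.
func profileHandler(w http.ResponseWriter, r *http.Request) {
    session, _ := store.Get(r, "session")
    if auth, ok := session.Values["authenticated"].(bool); !ok || !auth {
        http.Error(w, "forbidden", http.StatusForbidden)
        return
    }
    w.Write([]byte("hello, logged-in user"))
}

func main() {
    http.HandleFunc("/auth/callback", callbackHandler)
    http.HandleFunc("/profile", profileHandler)
    http.ListenAndServe(":8080", nil)
}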

Apologies for the basic questions: just trying to get a grip on "best practice" ways to log users in w/ OAuth.

With regard to your first question, it's usually recommended to do the check and insert in a single transaction. It depends on what DB you're using, but these are usually referred to as UPSERT statements. In PL/pgSQL it looks a bit like this (modify to taste):

CREATE FUNCTION upsert_user(emailv character varying, saltv character varying, hashv character varying, date_createdv timestamp without time zone) RETURNS void
    LANGUAGE plpgsql
AS $$
BEGIN
    LOOP
        -- first try to update the key
        UPDATE users SET (salt, hash) = (saltv, hashv) WHERE email = emailv;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert the key
        -- if someone else inserts the same key concurrently,
        -- we could get a unique-key failure
        BEGIN
            INSERT INTO users(email, salt, hash, date_created) VALUES (emailv, saltv, hashv, date_createdv);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$;
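
On the Go side, calling that function from your callback handler might look something like this (a sketch, assuming database/sql with the lib/pq driver; the parameters follow the function above rather than your GoogleUser fields):

package userdb

import (
    "database/sql"
    "time"

    _ "github.com/lib/pq" // Postgres driver, registered for database/sql
)

// UpsertUser calls the upsert_user function defined above.
func UpsertUser(db *sql.DB, email, salt, hash string) error {
    _, err := db.Exec(
        "SELECT upsert_user($1, $2, $3, $4)",
        email, salt, hash, time.Now().UTC(),
    )
    return err
}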

Regarding your second question, Secure cookies over HTTPS are usually enough. I'd set the HttpOnly flag, and usually the Path option as well.

HttpOnly means the cookie can't be accessed from JavaScript (it's only sent over HTTP or HTTPS), and Path lets you specify which URL path the cookie is valid for.
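
With gorilla/sessions (which it sounds like you're using), a sketch of setting those options on the cookie store might look like this (the key and MaxAge are placeholders):

package session

import "github.com/gorilla/sessions"

var store = sessions.NewCookieStore([]byte("session-secret")) // placeholder key

func init() {
    store.Options = &sessions.Options{
        Path:     "/",       // cookie is sent for every path under "/"
        MaxAge:   86400 * 7, // about a week, in seconds
        Secure:   true,      // only sent over HTTPS
        HttpOnly: true,      // not accessible from JavaScript
    }
}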

Access tokens in the OAuth standard have an expiry, which is usually determined by the authorization server. In your case I assume you are on the authorization server side.

See RFC 6750, for example:

Typically, a bearer token is returned to the client as part of an OAuth 2.0 [RFC6749] access token response. An example of such a response is:

 HTTP/1.1 200 OK
 Content-Type: application/json;charset=UTF-8
 Cache-Control: no-store
 Pragma: no-cache

 {
   "access_token":"mF_9.B5f-4.1JqM",
   "token_type":"Bearer",
   "expires_in":3600,
   "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA"
 }

Also read the description of access tokens in RFC 6749:

The access token provides an abstraction layer, replacing different authorization constructs (e.g., username and password) with a single token understood by the resource server. This abstraction enables issuing access tokens more restrictive than the authorization grant used to obtain them, as well as removing the resource server's need to understand a wide range of authentication methods.

So in your case, I don't think a "cookie" or "admin handler" is needed. You only have to generate an access token and a refresh token for each logged-in user, just as the OAuth spec says, and store their expiry as well. You can also tie a hashing scheme to the access token to make sure a request is legitimate: for example, clients use their access token to generate a signature with a hash-and-salt method, then send the access token and signature to the server to verify. Read up on public-key encryption for more details.
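
If you go down that road, a minimal sketch of issuing a random token with an expiry might look like this (the Token struct and the 32-byte length are assumptions, not something mandated by the spec):

package token

import (
    "crypto/rand"
    "encoding/base64"
    "time"
)

// Token is a hypothetical in-memory representation of an issued token.
type Token struct {
    Value     string
    ExpiresAt time.Time
}

// New issues a random bearer-style token valid for the given lifetime.
func New(lifetime time.Duration) (Token, error) {
    b := make([]byte, 32)
    if _, err := rand.Read(b); err != nil {
        return Token{}, err
    }
    return Token{
        Value:     base64.RawURLEncoding.EncodeToString(b),
        ExpiresAt: time.Now().Add(lifetime),
    }, nil
}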

Furthermore, you don't need to save these tokens in your DB, because they are all temporary resources. You can also keep all user information in memory and implement a cache layer that periodically saves the truly important information to the DB (which is what I'm currently doing) to reduce DB pressure.
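
A rough sketch of that caching pattern (GoogleUser and saveToDB stand in for whatever your real types and persistence code are):

package cache

import (
    "log"
    "sync"
    "time"
)

// GoogleUser stands in for the struct you unmarshal the OAuth response into.
type GoogleUser struct {
    Email string
    Name  string
}

// UserCache keeps user records in memory and flushes them to the DB periodically.
type UserCache struct {
    mu    sync.Mutex
    users map[string]GoogleUser // keyed by email
}

func NewUserCache() *UserCache {
    return &UserCache{users: make(map[string]GoogleUser)}
}

// Put stores or replaces a user record in memory.
func (c *UserCache) Put(u GoogleUser) {
    c.mu.Lock()
    c.users[u.Email] = u
    c.mu.Unlock()
}

// FlushEvery snapshots the cache on a fixed interval and hands it to saveToDB,
// which would write the important fields to the database.
func (c *UserCache) FlushEvery(interval time.Duration, saveToDB func(map[string]GoogleUser) error) {
    for range time.Tick(interval) {
        c.mu.Lock()
        snapshot := make(map[string]GoogleUser, len(c.users))
        for k, v := range c.users {
            snapshot[k] = v
        }
        c.mu.Unlock()
        if err := saveToDB(snapshot); err != nil {
            log.Println("flush:", err)
        }
    }
}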