PostgreSQL deadlock on insert and update

Below is the Postgres log:

Process 10396 waits for RowShareLock on relation 17204 of database 16384; blocked by process 10377.
Process 10377 waits for ShareLock on transaction 149848948; blocked by process 10396.
Process 10396: insert into "completed_jobs" ("id", ....... "limitation_code") values ($1, ...... $22) returning "id"
Process 10377: UPDATE jobs SET status='pending', updated_at=$1 WHERE id=$2

I'm executing this from Go, and it runs in a distributed environment (multiple workers).

The update is a normal execution:

    _, err = tx.Exec("UPDATE jobs SET status='pending', updated_at=$1 WHERE id=$2", time.Now().UTC(), job.Id)
    if err != nil {
        log.Println(getMessagePrefix(job, nil), "Error updating job status to pending", err)
    }
    err = tx.Commit()
    if err != nil {
        log.Println(getMessagePrefix(job, nil), "Error committing status update", err)
    }

The insert is inside a transaction:

    tx, err := db.Begin()
    if err != nil {
        log.Println(getMessagePrefix(job, nil), "Error beginning transaction", err)
        return
    }
    tx.Exec("UPDATE jobs SET status=$1 WHERE id=$2", status, job.Id)
    tx.Exec("INSERT INTO completed_jobs SELECT * FROM jobs WHERE id=$1", job.Id)
    tx.Exec("DELETE FROM jobs WHERE id=$1", job.Id)
    err = tx.Commit()

For job queuing I have generally used the following strategies to prevent multiple processes from grabbing the same jobs. In general you need cross-transaction controls to handle this.

  • Advisory locks, which are discretionary locks that can be held across transaction boundaries. This lets you exclude in-process jobs before returning them to the client, and it removes the need for a 'pending' status as well (see the first sketch after this list).
  • Short transactions that select randomized ids, so concurrent processes are unlikely to hit the same rows. Keeping transactions short also matters if an index is involved: long-running transactions can leave a lot of dead tuples at the head of an index (see the second sketch at the end of this answer).
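
Here is a minimal sketch of the first strategy, assuming database/sql with the lib/pq driver, a bigint id column, and a 'new' status value; the claimJob helper is hypothetical. Session-level advisory locks must be taken and released on the same connection, so the sketch pins one with db.Conn:

    package main

    import (
        "context"
        "database/sql"

        _ "github.com/lib/pq"
    )

    // claimJob (a hypothetical helper) pins one pooled connection and takes a
    // session-level advisory lock keyed on a candidate job id. The lock outlives
    // individual transactions on that connection, so other workers skip the job
    // without any 'pending' status column.
    func claimJob(ctx context.Context, db *sql.DB) (*sql.Conn, int64, error) {
        conn, err := db.Conn(ctx) // pin a connection; session locks live on it
        if err != nil {
            return nil, 0, err
        }
        // Grab a small, randomized batch of candidates. Randomizing the order
        // also reduces the chance that concurrent workers fight over one row.
        rows, err := conn.QueryContext(ctx,
            `SELECT id FROM jobs WHERE status = 'new' ORDER BY random() LIMIT 10`)
        if err != nil {
            conn.Close()
            return nil, 0, err
        }
        // Collect the ids first so the pinned connection is free again before
        // we issue the lock queries.
        var ids []int64
        for rows.Next() {
            var id int64
            if err := rows.Scan(&id); err != nil {
                rows.Close()
                conn.Close()
                return nil, 0, err
            }
            ids = append(ids, id)
        }
        rows.Close()
        if err := rows.Err(); err != nil {
            conn.Close()
            return nil, 0, err
        }
        for _, id := range ids {
            var locked bool
            // pg_try_advisory_lock never blocks; false means another worker has it.
            if err := conn.QueryRowContext(ctx,
                `SELECT pg_try_advisory_lock($1)`, id).Scan(&locked); err != nil {
                conn.Close()
                return nil, 0, err
            }
            if locked {
                return conn, id, nil
            }
        }
        conn.Close()
        return nil, 0, sql.ErrNoRows // nothing claimable right now
    }

When the job is done, run SELECT pg_advisory_unlock($1) on the same conn, or simply close the connection: session-level advisory locks are released automatically when the session ends, which also covers crashed workers.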

Concurrency in job-queue systems poses a large number of problems, but those two strategies solve the worst of them.
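
And a minimal sketch of the second strategy, under the same assumptions (and the same imports) as above; claimRandomJob is again a hypothetical helper. A lone statement runs in its own short transaction, so no lock is held longer than the UPDATE itself:

    // claimRandomJob (a hypothetical helper) claims one job in a single short
    // statement-level transaction, picking a random candidate so that
    // concurrent workers rarely collide on the same row.
    func claimRandomJob(db *sql.DB) (int64, error) {
        var id int64
        err := db.QueryRow(`
            UPDATE jobs
               SET status = 'pending', updated_at = now()
             WHERE status = 'new'
               AND id = (SELECT id FROM jobs
                          WHERE status = 'new'
                          ORDER BY random() LIMIT 1)
            RETURNING id`).Scan(&id)
        return id, err // sql.ErrNoRows means nothing was claimed
    }

If two workers do race on the same id, the loser waits briefly for the winner's statement to commit, re-checks status = 'new', updates zero rows, and gets sql.ErrNoRows back, so the job is never claimed twice; the randomized pick makes even that collision rare.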