
org.postgresql.util.PSQLException: ERROR: deadlock detected #250

Open

catmeme opened this issue Oct 10, 2022 · 2 comments

Comments


catmeme commented Oct 10, 2022

I've been getting a "deadlock detected" error every few days.

I'm not entirely sure what's causing it. For what it's worth, this instance is receiving international statements, not English only.

21:39:52.116 [qtp195949131-13] WARN  lrsql.util.concurrency - Rerunable Transaction exhausted attempts or could not be retried
21:39:52.127 [qtp195949131-13] ERROR c.y.lrs.pedestal.interceptor - {:msg "Unhandled LRS Error", :line 244}
org.postgresql.util.PSQLException: ERROR: deadlock detected
  Detail: Process 27183 waits for ShareLock on transaction 6055610; blocked by process 21912.
Process 21912 waits for ShareLock on transaction 6055623; blocked by process 27183.
  Hint: See server log for query details.
  Where: while updating tuple (137,12) in relation "activity"
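
For context, this error follows the classic deadlock pattern: two concurrent transactions update the same rows in opposite order, each holding a row lock the other needs. A minimal sketch in SQL, using a hypothetical two-column table (id, payload) rather than SQL LRS's actual schema:

    -- session 1:
    BEGIN;
    UPDATE activity SET payload = 'a' WHERE id = 1;  -- takes a row lock on id = 1

    -- session 2:
    BEGIN;
    UPDATE activity SET payload = 'b' WHERE id = 2;  -- takes a row lock on id = 2

    -- session 1:
    UPDATE activity SET payload = 'a' WHERE id = 2;  -- blocks, waiting on session 2

    -- session 2:
    UPDATE activity SET payload = 'b' WHERE id = 1;  -- both now wait on each other;
    -- after deadlock_timeout elapses, Postgres aborts one of the two
    -- transactions with "ERROR: deadlock detected"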
kelvinqian00 (Collaborator) commented

Hello @catmeme!

Inserting a large number of statements is known to cause deadlocks under certain circumstances. What are the properties of your statement input stream, e.g. the total number of statements, the statement batch size, and what your statement data looks like?

Based on the error message, it may be that the statements in your input stream share a few common activities, forcing SQL LRS to constantly and concurrently update the activity table, which would result in deadlocks. See the note at the bottom of this section for more details. In short, if that is the case, the best approach is to redesign your statement data; if that is not possible, reducing statement batch sizes and adjusting the LRSQL_STMT_RETRY_LIMIT and LRSQL_STMT_RETRY_BUDGET config vars may mitigate the issue.
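
For what it's worth, a minimal sketch of the second mitigation, assuming SQL LRS is configured through environment variables as the comment suggests; the variable names come from the comment above, but the values are illustrative only:

    # Illustrative values only -- check the SQL LRS docs for defaults and units
    LRSQL_STMT_RETRY_LIMIT=20     # allow more retry attempts per failed transaction
    LRSQL_STMT_RETRY_BUDGET=5000  # allow a larger total retry budget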


catmeme commented Oct 10, 2022

Thanks, this tracks with what I'm seeing. Our input stream was running at a much higher rate than usual because we were transferring a backlog of hundreds of thousands of records. We also have some statements whose descriptions were mistakenly translated; that sounds like it could be causing the deadlock when the activity's description gets updated.

I'll try playing with the configuration, thank you.
