Error on DeleteRelationships: Max retries exceeded. Could not serialize access due to read/write dependencies among transactions

Nandy

03/10/2023, 8:27 AM
Hi, I am trying to delete relationships in bulk, so I am calling the /relationships/delete API inside Promise.all(). It fails with the error "Max retries exceeded. Could not serialize access due to read/write dependencies among transactions". Can anyone help me figure out how to do a bulk delete? I also tried authzed-node and it gives the same error. Please help. Thanks
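A minimal sketch of the kind of parallel delete described above, assuming the SpiceDB HTTP gateway is reachable at a hypothetical endpoint and token; the filter fields mirror the one shared later in the thread, and the resource IDs are placeholders:

// Hypothetical sketch: firing many DeleteRelationships calls at once via
// the SpiceDB HTTP gateway. Endpoint, token, and IDs are assumptions.
const ENDPOINT = "http://localhost:8443/v1/relationships/delete";
const TOKEN = "sometoken";

async function deleteForResource(resourceId: string): Promise<void> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      relationshipFilter: {
        resourceType: "type",
        optionalResourceId: resourceId,
        optionalRelation: "member",
        optionalSubjectFilter: {
          subjectType: "user",
          optionalSubjectId: "user",
        },
      },
    }),
  });
  if (!res.ok) {
    throw new Error(`delete failed: ${res.status} ${await res.text()}`);
  }
}

async function main(): Promise<void> {
  const resourceIds: string[] = [/* ...15k ids... */];
  // Launching every delete concurrently is what provokes the
  // serialization errors discussed below.
  await Promise.all(resourceIds.map((id) => deleteForResource(id)));
}

main().catch(console.error);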

vroldanbet

03/10/2023, 9:03 AM
👋🏻 Any chance you could share the code you are using to delete and how you are constructing the delete request? Are you using MySQL or Postgres?

Nandy

03/16/2023, 11:21 AM
Hi @vroldanbet, sorry for the late response. I am using this format:
{
  "relationshipFilter": {
    "resourceType": "type",
    "optionalResourceId": "id",
    "optionalRelation": "member",
    "optionalSubjectFilter": {
      "subjectType": "user",
      "optionalSubjectId": "user"
    }
  }
}
and I am using Postgres.
I am deleting in parallel for different records. When I delete, a few batches succeed, but most of them fail.

vroldanbet

03/16/2023, 11:25 AM
Right, that kind of behaviour is expected. We use PostgreSQL at the serializable isolation level. If you are doing lots of concurrent writes, PG has to "serialize" all those requests, which can cause some writes to fail and need to be retried. Generally SpiceDB should retry writes, but perhaps we are missing something with retries in the PG datastore.
Do you have an idea of how many deletes you are running in parallel?
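A minimal sketch of client-side retry with backoff for these serialization failures, reusing the hypothetical deleteForResource helper from the earlier sketch; the attempt count and delays are assumptions:

// Hypothetical retry wrapper for writes that fail with
// "could not serialize access" style errors.
async function deleteWithRetry(
  resourceId: string,
  maxAttempts = 5,
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await deleteForResource(resourceId); // from the earlier sketch
      return;
    } catch (err) {
      if (attempt === maxAttempts) throw err;
      // Exponential backoff with jitter before retrying the delete.
      const delayMs = 100 * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}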

Nandy

03/16/2023, 11:27 AM
more than 15k records
I am processing them through a message queue, but I believe the queue processes them in batches.

vroldanbet

03/16/2023, 11:31 AM
so you are deleting 15K records, but you don't know how many concurrent workers you are running?

Nandy

03/16/2023, 11:45 AM
First I tried with Promise.all(), which threw the error.
Then I tried with a message queue, assuming it would pick one record at a time, but that also throws the same error.
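A minimal sketch of bounding concurrency instead of firing everything at once, reusing the hypothetical deleteWithRetry helper above; the chunk size is an assumption:

// Hypothetical bounded-concurrency loop: process deletes in small chunks so
// only a handful of transactions contend for serialization at a time.
async function deleteAll(resourceIds: string[], concurrency = 5): Promise<void> {
  for (let i = 0; i < resourceIds.length; i += concurrency) {
    const chunk = resourceIds.slice(i, i + concurrency);
    await Promise.all(chunk.map((id) => deleteWithRetry(id)));
  }
}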

vroldanbet

03/16/2023, 11:47 AM
is it possible each write is touching 15K records at a time?
Maybe you could enable debug errors and see if it shows more details?

Nandy

03/16/2023, 11:53 AM
Sure, let me try and get more details.
Hi @vroldanbet, if I use CockroachDB or MySQL, would I run into similar issues with the serializable level? Or are concurrent bulk deletes allowed in those?

vroldanbet

03/17/2023, 8:56 AM
Same thing
They all work at the same isolation level