# spicedb
y
Is there any way to `BatchWriteRelationship`?
v
The `WriteRelationships` API already supports batch writes of relationships. You can add multiple `RelationshipUpdate`s.
It's actually more powerful than just batch writes - it allows you to transactionally create, touch, or delete relationships
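For concreteness, here's a sketch of what a multi-update write can look like. This uses plain Python dicts mirroring the v1 API's JSON shape rather than a real client call (which would use a generated client library and a running SpiceDB), so treat the exact field names as illustrative:

```python
# Sketch of a WriteRelationships request carrying several RelationshipUpdates.
# Plain dicts stand in for the generated request classes; object types and IDs
# below (document, user, etc.) are made-up examples.

def relationship_update(operation, resource, relation, subject):
    """Build one RelationshipUpdate entry (operation: CREATE, TOUCH, or DELETE)."""
    return {
        "operation": f"OPERATION_{operation}",
        "relationship": {
            "resource": {"objectType": resource[0], "objectId": resource[1]},
            "relation": relation,
            "subject": {"object": {"objectType": subject[0], "objectId": subject[1]}},
        },
    }

# One request can mix operations; they are applied in a single transaction.
request = {
    "updates": [
        relationship_update("TOUCH", ("document", "doc1"), "reader", ("user", "alice")),
        relationship_update("TOUCH", ("document", "doc2"), "reader", ("user", "alice")),
        relationship_update("DELETE", ("document", "doc3"), "reader", ("user", "bob")),
    ]
}
```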
y
kill me, I swear my brain didn't see the `repeated` in front of it ... now I am mad at myself for not paying closer attention
v
no problem 😄
y
another question: how am I supposed to deal with idempotency?
I searched the docs but I couldn't find a header or anything like that as part of the message
v
you can do idempotent writes with `TOUCH` instead of `CREATE`
I'm not sure if deletes are idempotent though
yup, deletes are idempotent, just tested
y
I am away from the computer, gonna check it out later
Touch vs. create 🤔
```
CREATE will create the relationship only if it doesn't exist, and error otherwise.

TOUCH will upsert the relationship, and will not error if it already exists.
```
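The difference is easy to see in a toy in-memory model (this is a simulation of the documented semantics, not real SpiceDB code):

```python
# Toy model: CREATE errors if the relationship already exists, TOUCH upserts.

class AlreadyExists(Exception):
    """Raised when CREATE targets a relationship that is already present."""

def write(store, operation, key):
    if operation == "CREATE":
        if key in store:
            raise AlreadyExists(key)
        store.add(key)
    elif operation == "TOUCH":
        store.add(key)  # upsert: succeeds whether or not the key exists

store = set()
write(store, "CREATE", "document:doc1#reader@user:alice")  # first write: ok
write(store, "TOUCH", "document:doc1#reader@user:alice")   # upsert: no error
try:
    write(store, "CREATE", "document:doc1#reader@user:alice")
except AlreadyExists:
    print("CREATE errored: relationship already exists")
```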
I see, what a weird name, `TOUCH` - why not call it `UPSERT`?
Maybe it's because I don't understand it, but how is that an idempotent request? Imagine this:
T1: Add relationship -- takes too long
T2: Remove relationship -- happens faster than T1
T1: Timeout -- times out
T1: Retry add relationship -- redo
What would happen in that case?
v
agreed touch is a weird name
it's idempotent because if you run the same query multiple times, the outcome is the same
in your scenario, there could be serialization conflicts if the transactions overlap, but generally the most recent operation wins. SpiceDB also retries when it sees serialization errors
are you thinking of something like cooperation with the client side, like an idempotency key?
y
Right, I MUST deal with batch event processing (event sourcing events - doesn't matter, just sharing more context), so I can use the event ID, which is guaranteed to be unique. If the batch fails, I have to reprocess the event, so I MUST design the system for at-least-once processing, so I can have many.
In an ideal scenario, I only have ONE processor dealing with a given relationship, so maybe, maybe, it is easier because I am linearizing the problem. But that is in the perfect scenario.
v
yeah, I see your point. You don't want overlapping operations' side effects to be reordered because of client-side retries
I think you could achieve that in combination with `CREATE`. It won't be idempotent in the sense of returning the same response, but it would let you get the same effect. You could add a `CREATE` for a relationship that carries your idempotency key, and write it transactionally with the actual relationships - if `WriteRelationships` fails because the idempotency-key relationship already exists, then you know the operation was already performed
it's not ideal because it returns a different result (an error) and you don't get the `zedtoken` returned by the operation, but you can make sure the operation was performed. I still think it could be useful to have first-class support for idempotency keys, so feel free to open a GitHub issue
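The pattern being described can be sketched with the same toy in-memory model as before. The `idempotencykey` object type and the relationship names here are made up for illustration; the real point is that the marker `CREATE` and the payload updates succeed or fail together:

```python
# Sketch of the idempotency-key pattern: include one CREATE for a marker
# relationship that encodes the event ID in the same transactional write as
# the real updates. A retry fails on that CREATE, proving the batch was
# already applied. All names are illustrative, not a real SpiceDB schema.

class AlreadyExists(Exception):
    """Raised when CREATE targets a relationship that is already present."""

def write_relationships(store, updates):
    """Apply updates transactionally: all succeed or none do."""
    staged = set(store)
    for op, key in updates:
        if op == "CREATE":
            if key in staged:
                raise AlreadyExists(key)
            staged.add(key)
        elif op == "TOUCH":
            staged.add(key)
    store.clear()
    store.update(staged)

def apply_event(store, event_id, updates):
    marker = ("CREATE", f"idempotencykey:{event_id}#applied@system:writer")
    try:
        write_relationships(store, [marker] + updates)
        return "applied"
    except AlreadyExists:
        return "already applied"  # safe to skip on an at-least-once redelivery

store = set()
batch = [("TOUCH", "document:doc1#reader@user:alice")]
print(apply_event(store, "evt-42", batch))  # applied
print(apply_event(store, "evt-42", batch))  # already applied (retry is a no-op)
```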
y
Most definitely having first-class idempotency support is key for reliable operations around it!
v
Thanks Yordis!
We've been discussing a potential 2PC-like API that lets you prepare a write. If that API supported an idempotency key, would it work for you? e.g.
1. Prepare a `WriteRelationships` call, including metadata for the idempotency key.
2. Stream processor attempts to commit.
3. Stream processor dies before it gets the commit acknowledgement.
4. Stream processor is restarted and resumes from the last processed event.
5. Stream processor prepares a `WriteRelationships` call with the same idempotency key.
6. Stream processor gets a response saying the operation was already processed, including the result of committing it.
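To make the proposed flow concrete: this API does not exist in SpiceDB, so the following is a toy simulation of the behavior sketched in the steps above, with a dict standing in for the server's record of committed keys:

```python
# Toy simulation of the proposed prepare/commit flow with an idempotency key.
# Entirely hypothetical: the class, method, and return shape are invented to
# illustrate the conversation, not drawn from any real SpiceDB API.

class Server:
    def __init__(self):
        self.committed = {}  # idempotency key -> commit result

    def prepare_and_commit(self, key, updates):
        if key in self.committed:
            # Step 6: the operation was already processed; replay the result.
            return {"status": "already_committed", "result": self.committed[key]}
        result = f"zedtoken-for-{key}"  # stand-in for the real commit result
        self.committed[key] = result
        return {"status": "committed", "result": result}

server = Server()
first = server.prepare_and_commit("evt-42", ["touch document:doc1#reader@user:alice"])
# Processor dies after committing but before seeing the ack, restarts, retries:
retry = server.prepare_and_commit("evt-42", ["touch document:doc1#reader@user:alice"])
print(first["status"], retry["status"])    # committed already_committed
print(first["result"] == retry["result"])  # True
```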
y
I read 2PC and I cry automatically. Are you really sure you want that?
What about just saving the response for 24 hours, after which the cache for that given key is deleted?
2PC will introduce a lot of overhead in the server (as far as I can tell - maybe you have way more context than me), potentially hurting latency and throughput a lot. I would say, reach for 2PC only if we run out of ideas 🙂
v
I'm not going to say I'm 100% sure, because I'm also not a fan of the idea, but the reality is that anything that looks remotely similar to Event-Driven Architectures/Event Sourcing/CQRS or Transaction-Log Tailing in general has a steep cost for engineering organizations. They may not be ready for it, or it may be downright unlikely to happen in the short, mid, or even long term - I've been there. There are organizational and political forces at play sometimes.
I think we want to meet our customers where they are. Asking them to "rearchitect your application" is an unrealistic barrier to entry.
y
too real!
I am not sure what the right solution is; honestly, I can get away without it to some extent because of how I architect things - I am way more informed about the "consistency" of the data. I know I will need the feature, I just don't know when right now hehe
Your comment about asking your customers to rearchitect ... trust me, it hurts me so much that people like you have to conform to that reality. It always feels like an uphill battle