I think I have a use case for this, and I'm curious whether it's an abuse of the feature or not.
We need to replicate data between systems consistently. Say you're building something like Google Docs: the user performs a single atomic operation that moves a doc to a different folder and also updates other state (say, the document's name).
The requirements are:
1. The caller needs a ZedToken back (so we cannot update SpiceDB asynchronously)
2. The folder move and the doc rename must be applied atomically (both or neither)
3. SpiceDB and another database must both store this state, but they can be eventually consistent with each other
If I update SpiceDB first and then my own store, I have race and failure conditions that can lead to inconsistency: SpiceDB thinks the doc was moved but I never persisted the rename, or I overwrite a parallel rename that interleaved.
One solution would be to call SpiceDB, tell it about the folder move via relationships, and attach the other updates as metadata on the same write. Then I listen to the Watch API and update my own store locally.
1. I get the ZedToken from SpiceDB at write time
2. Both the folder and the name are stored atomically in SpiceDB
3. The folder move happens, and as long as I replay the Watch stream in order and track the ZedToken of the last applied update, I should see every update and stay consistent (eventually consistent, but that's fine)
The question, I guess, is how costly it would be to attach this extra annotation data to every write.