s
Yes.
e
can you describe a little how you would plan to use it? the tl;dr on timeline is that we were waiting to see if there was any interest in it. if you can help us understand what you want to do, that might bump it up a bit
s
So we have a postgres db that contains all of our application data. We have a bunch of triggers and views that transform that data on update/insert/delete into rows on relation_tuple for SpiceDB to consume. This is OK, but runs the risk of being a serious bottleneck in the application, because it slows down writes by a factor of 100 (probably more as our data grows). It also forces us to answer a lot of questions about organization, documentation, and responsible parties to ensure it's maintainable. If we could run this connector instead, it would get us away from maintaining these triggers and requiring back-end developers to know about them, and to know how the presence of the triggers impacts DB schema changes.
Our DB schema is managed by a set of sqitch migrations, but I consider SpiceDB, and the triggers and views that transform app data into relation tuples, to be meta-objects separate from the main schema. The problem is that this abstraction breaks down whenever we need to change the database schema, because the views and triggers are tightly coupled to the current shape of the data. And since the triggers are synchronous, a schema change could cause any number of queries to fail at some later time.
e
so if we did address the issue with connector-postgres, you'd be able to sync data from postgres -> spicedb async and in the background, but there wouldn't be any guarantee of when exactly those tuples will get synced into spicedb with the connector. if you have any workflows that involve zedtokens it might not work for you (we have some ideas to make that possible, but we're not actively working on it)
it sounds like you don't want applications to write to both spicedb and postgres? I imagine you can avoid a lot of the performance problems you see with triggers by doing that (i.e. you can bulk write relationships into spicedb instead of running a trigger per row, etc)
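for reference, a bulk write could look roughly like this with the authzed-go v1 client (just a sketch: spiceClient, rows, and the document/user schema here are illustrative, not from your setup):
// v1 = github.com/authzed/authzed-go/proto/authzed/api/v1
// sketch: one WriteRelationships call carries many updates, so a batch of
// rows becomes a single round trip instead of one trigger firing per row
updates := make([]*v1.RelationshipUpdate, 0, len(rows))
for _, row := range rows {
    updates = append(updates, &v1.RelationshipUpdate{
        Operation: v1.RelationshipUpdate_OPERATION_TOUCH,
        Relationship: &v1.Relationship{
            Resource: &v1.ObjectReference{ObjectType: "document", ObjectId: row.ID},
            Relation: "owner",
            Subject: &v1.SubjectReference{
                Object: &v1.ObjectReference{ObjectType: "user", ObjectId: row.OwnerID},
            },
        },
    })
}
_, err := spiceClient.WriteRelationships(ctx, &v1.WriteRelationshipsRequest{Updates: updates})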
s
Yeah I've been trying to avoid the route of doing the write at application time. Again, for the sake of not taxing back-end developers with this added responsibility as the code evolves.
But... it does seem that I've not been able to absolve them of that by using the trigger route... and made it harder to think about in the process by abstracting it out. 😓
e
we had imagined that if you were in control of the application it would more or less always be preferable to do this in a transaction, something like:
tx := db.BeginTx()
spiceClient.WriteRelationships(...)  // ACLs go to SpiceDB first
tx.WriteData(...)                    // app data goes into the open tx
if err := tx.Commit(); err != nil {
    // the failed commit discards the db write; compensate in SpiceDB
    spiceClient.DeleteRelationships(...)
}
that way you get feedback in the application about which thing failed
the connectors were what we were imagining for when you want to sync data from another service that isn't fully in your control
s
If you're using automatic IDs, how does that work? You don't have the ID until after the insert.
e
you can interleave them within the transaction
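e.g. something like this sketch, assuming database/sql and Postgres's INSERT ... RETURNING (relationshipsFor and deleteFor are made-up helpers that build the SpiceDB requests from the new ID):
// use INSERT ... RETURNING so the generated ID is available inside the open tx
tx, _ := db.BeginTx(ctx, nil)
var docID string
tx.QueryRowContext(ctx,
    `INSERT INTO documents (title) VALUES ($1) RETURNING id::text`,
    title).Scan(&docID)

// the SpiceDB write can now reference the generated ID
spiceClient.WriteRelationships(ctx, relationshipsFor(docID))

if err := tx.Commit(); err != nil {
    spiceClient.DeleteRelationships(ctx, deleteFor(docID)) // compensate
}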
s
Yeah duh, it's atomic, order doesn't matter
e
it does mean that devs need to know about spicedb when writing data
but it might be easier to grep for where access changes are happening, etc
s
Yeah that's my sticking point. Adding to required dev knowledge.
But at the expense of unnecessarily taxing the database...
e
hm, well if you care about protecting against things like the new enemy problem (https://authzed.com/blog/new-enemies/ if you haven't seen it), devs will need to know about zedtokens / spicedb, because you'll likely store the tokens alongside the data.
if you don't care about that for your use case, and it's okay for there to be small windows where acls/data don't line up, then the connector-postgresql approach might work well for you (since you won't have access to the revisions from the writes that the connector does anyway).
and like I teased earlier, we have discussed ways to maybe do both, but they're more involved and will require a lot more discussion before we could ship it
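storing the token on the write path could look roughly like this (a sketch: the documents.zedtoken column and req are assumptions; WriteRelationships does return the ZedToken in its response):
// WriteRelationships returns the ZedToken of the revision it wrote at;
// storing it next to the row lets later reads demand at least that freshness
resp, err := spiceClient.WriteRelationships(ctx, req)
if err != nil {
    return err
}
_, err = db.ExecContext(ctx,
    `UPDATE documents SET zedtoken = $1 WHERE id = $2`,
    resp.WrittenAt.Token, docID)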
s
If we had laser-precise triggers it would be one thing, but I abandoned that approach for a statement-level trigger that uses views to collect the data when it became clear that it would be too hard to write and maintain.
Yeah, our triggers do put the zedtoken in. I reverse-engineered how to simulate a zedtoken in PL/pgSQL. I am a long way down this rabbit hole.
e
ah interesting - so do you also have triggers that use the stored zedtokens to perform a check before reading the rest of the row?
sorry I should've asked: how do you use the stored zedtokens?
s
I believe the triggers do a zedtoken check at the trigger level and prevent the update if the data is stale, but I'd have to check.
I had planned to use the zedtokens to do a preflight query at the request handler level, in order to ensure that spicedb checked the latest data.
So you have the ID, grab the zedtoken from the db, then do a permissions check with spicedb using the zedtoken
I'm gonna ruminate on this a bit.
e
it sounds like this is what you're doing already, but just for reference these docs go over storage / usage of zedtokens: https://docs.authzed.com/reference/zedtokens-and-zookies#how-do-i-use-them
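a preflight check against a stored token might look roughly like this (sketch: assumes the token lives in a documents.zedtoken column and that docID/userID are strings):
// read the stored token, then ask SpiceDB for an answer at least as fresh
// as the revision that token represents
var token string
db.QueryRowContext(ctx,
    `SELECT zedtoken FROM documents WHERE id = $1`, docID).Scan(&token)

resp, err := spiceClient.CheckPermission(ctx, &v1.CheckPermissionRequest{
    Consistency: &v1.Consistency{
        Requirement: &v1.Consistency_AtLeastAsFresh{
            AtLeastAsFresh: &v1.ZedToken{Token: token},
        },
    },
    Resource:   &v1.ObjectReference{ObjectType: "document", ObjectId: docID},
    Permission: "view",
    Subject:    &v1.SubjectReference{Object: &v1.ObjectReference{ObjectType: "user", ObjectId: userID}},
})
allowed := err == nil &&
    resp.Permissionship == v1.CheckPermissionResponse_PERMISSIONSHIP_HAS_PERMISSION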