bmullan91 — 05/28/2024, 10:00 AM

vroldanbet — 05/28/2024, 10:12 AM

minimize_latency semantics, so long as the revision you provided is older than the new "optimized revision".
You could also do the same as SpiceDB does here and fall back to minimize_latency when no zedtoken is found, with the assumption that a key should always exist for a resource that changed.
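A minimal sketch of that read-side rule, with a plain Map standing in for the zedtoken store (real lookups would be async) and with type and function names that are purely illustrative, not the actual authzed client API:

```typescript
// Decide which consistency mode to send on a check, per the rule above:
// use the cached zedtoken when one exists for the resource, otherwise
// fall back to minimize_latency.
type CheckConsistency =
  | { kind: "atLeastAsFresh"; zedToken: string }
  | { kind: "minimizeLatency" };

// `cache` stands in for Redis; a real implementation would be async.
function consistencyForRead(
  cache: Map<string, string>,
  resourceKey: string,
): CheckConsistency {
  const token = cache.get(resourceKey);
  return token !== undefined
    ? { kind: "atLeastAsFresh", zedToken: token }
    : { kind: "minimizeLatency" };
}
```

The assumption, as above, is that a key should always exist for a resource that changed recently, so a cache miss means the quantized revision has already caught up.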
w.r.t. having this done by SpiceDB: we've actually discussed something like this internally but haven't opened a proposal, as we are still not 100% convinced this is the way to go. It would be akin to "named zedtokens", and would mean SpiceDB would store a name->zedtoken mapping so you can reference it in your requests.

vroldanbet — 05/28/2024, 10:12 AM

bmullan91 — 05/28/2024, 10:24 AM

bmullan91 — 05/28/2024, 10:31 AM

vroldanbet — 05/28/2024, 10:51 AM

minimize_latency call in parallel while waiting for the Redis response, and only issue the request if the token exists / Redis is up and healthy.
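A sketch of that hedge, using hypothetical `checkPermission` / `getTokenFromRedis` stand-ins rather than a real client API: the minimize_latency check is fired immediately, and the at_least_as_fresh check is only issued if the token lookup succeeds.

```typescript
// Hedge sketch: fire the minimize_latency check immediately, look up the
// zedtoken concurrently, and only issue the at_least_as_fresh check when a
// token exists and Redis is healthy. Both function parameters are
// hypothetical stand-ins, not a real SpiceDB client API.
type Consistency = { minimizeLatency: true } | { atLeastAsFresh: string };

async function hedgedCheck(
  checkPermission: (c: Consistency) => Promise<boolean>,
  getTokenFromRedis: () => Promise<string | null>,
): Promise<boolean> {
  // Not awaited yet: this runs while we wait for Redis.
  const fast = checkPermission({ minimizeLatency: true });
  let token: string | null = null;
  try {
    token = await getTokenFromRedis();
  } catch {
    // Redis down or unhealthy: fall back to the in-flight fast check.
  }
  if (token === null) return fast;
  fast.catch(() => {}); // abandon the hedge; ignore its outcome
  // Token found: issue the consistent check (this is the "doubled load" path).
  return checkPermission({ atLeastAsFresh: token });
}
```

As noted later in the thread, the cost of this hedge is that it can double the load on SpiceDB whenever a token is found.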
>The plan was to fall back to minimise latency with the assumption the new cache would already have the new tuple and all reads would be correct. With that do you think it is still useful to pass the zedToken even if it's 'stale'?
I think so, but you'd have to add some padding to make sure the quantization window has really elapsed. SpiceDB has 3 parameters to compute the window:
- the quantization window itself
- the crossfade factor (by default 20% if I'm not mistaken)
- follower-replica lag (only CRDB and Spanner)
The TTL in the cache is actually set to 2 times the total above.
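As a rough sketch of that arithmetic (the exact way SpiceDB combines the three parameters is an assumption here; treat the formula as illustrative):

```typescript
// Compute a cache TTL for a zedtoken, assuming the total window is
// quantization * (1 + crossfade) + follower lag, and the TTL is twice that
// total, per the description above. This is an assumption, not SpiceDB's
// exact internal computation.
function zedTokenCacheTtlMs(
  quantizationWindowMs: number,
  crossfadeFactor: number, // 20% by default, per the message above
  followerReplicaLagMs: number, // only relevant for CRDB and Spanner
): number {
  const totalWindowMs =
    quantizationWindowMs * (1 + crossfadeFactor) + followerReplicaLagMs;
  return 2 * totalWindowMs;
}
```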
@Joey can you confirm that, if storing the zedtoken for at least 1x the quantization window of the total above, clients are safe to fall back to minimize_latency?

bmullan91 — 05/28/2024, 10:57 AM

vroldanbet — 05/28/2024, 11:34 AM

williamdclt — 05/28/2024, 3:58 PM

authzed-node to provide additional functionality and QOL: things like client-side OTel, caching, and zedtoken handling.
For zedtoken handling we do exactly what you describe: on write we store the zedtoken in Redis (the key is the user ID); on read we get the zedtoken from Redis and fall back to minimize_latency.
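A sketch of that write/read pattern, with a Map standing in for Redis (real calls are async) and illustrative names rather than the authzed-node API:

```typescript
// On write: store the zedtoken returned by SpiceDB, keyed by user ID.
// On read: use the stored token if present, otherwise minimize_latency.
const tokenStore = new Map<string, string>(); // stands in for Redis

function onWrite(userId: string, writtenAtToken: string): void {
  // Caveat discussed in the thread: a concurrent write can clobber a newer
  // token with an older one, which is accepted here as eventual consistency
  // in edge cases.
  tokenStore.set(`zedtoken:${userId}`, writtenAtToken);
}

function consistencyForUser(
  userId: string,
): { atLeastAsFresh: string } | { minimizeLatency: true } {
  const token = tokenStore.get(`zedtoken:${userId}`);
  return token !== undefined
    ? { atLeastAsFresh: token }
    : { minimizeLatency: true };
}
```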
There are downsides:
- It doesn't guarantee read-your-own-writes (you can't have strong consistency across 2 datastores), but it is likely enough for our practical use case (with a fallback to eventual consistency)
- There's a risk of race conditions: concurrent writes could leave you with an outdated zedtoken in Redis
  - We accepted that; we had already accepted that we'd have eventual consistency in edge cases
  - Makes me think that this could be avoided if zedtokens were lexicographically sortable 🤔 then we could make sure to only update Redis if the new key is bigger (would need some Lua scripting)
- It adds overhead. Redis is fast but so is SpiceDB: Redis is a significant portion of the total, easily double-digit ms. At the P99 it's actually very significant, multiple times SpiceDB's response time.
  - The hedging described by @vroldanbet would help, but it potentially doubles the load on SpiceDB and adds complexity

ecordell — 05/28/2024, 5:20 PM

vroldanbet — 05/28/2024, 5:52 PM

vroldanbet — 05/28/2024, 5:58 PM

bmullan91 — 05/29/2024, 8:46 AM

at_least_as_fresh, however any other requests made by the user to check permissions against any other object would also incur the same latency hit.
Interesting to hear that Redis can become the bottleneck when it comes to latency. If what @vroldanbet is saying is true, running both in parallel (the Redis key check and the SpiceDB call with minimize_latency) has no real cost, so we can certainly do that, since it will be the hot path, i.e. only in certain circumstances will the zedtoken be in Redis.

vroldanbet — 05/29/2024, 8:49 AM

vroldanbet — 05/29/2024, 8:50 AM

williamdclt — 05/29/2024, 8:50 AM

williamdclt — 05/29/2024, 8:50 AM

williamdclt — 05/29/2024, 8:53 AM

bmullan91 — 05/29/2024, 8:58 AM

vroldanbet — 05/29/2024, 9:33 AM

williamdclt — 05/29/2024, 12:31 PM

vroldanbet — 05/29/2024, 12:48 PM