One SpiceDB cluster per Kubernetes cluster - shar...
# spicedb
For various historical reasons we're running multiple EKS clusters for one of our applications, where we're looking at using SpiceDB, and we want to share the same authorization model across all our services in those EKS clusters (all running in the same region and the same VPC). Is this possible with SpiceDB, so that they share the same PostgreSQL database? If so, would we need any special configuration, or would it "just work" if we use the spicedb-operator in each EKS cluster to set up the SpiceDBCluster? Would this, for example, cause problems for a multi-stage upgrade that the operator might need to perform when there's such a database change, since the multi-stage upgrade wouldn't be coordinated across EKS clusters? I tried to find documentation on how to run multiple SpiceDB clusters (if that's how you do it?). Or is all of this not possible, and should we use CockroachDB instead, as if it were a multi-region deployment (even though it's all in the same network)?
I assume the SpiceDB upgrade could be managed by us coordinating the new version across all EKS clusters ourselves, instead of letting the operator do it: https://authzed.com/docs/spicedb/operator#manual-upgrades
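For reference, here's roughly what I'd deploy in each EKS cluster, based on the operator's README example; the names and the connection string are placeholders, with every cluster's `datastore_uri` pointing at the same shared PostgreSQL endpoint:

```yaml
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: spicedb
spec:
  config:
    datastoreEngine: postgres
  secretName: spicedb-config
---
apiVersion: v1
kind: Secret
metadata:
  name: spicedb-config
stringData:
  preshared_key: "<shared preshared key>"
  # Same shared PostgreSQL endpoint from every EKS cluster (placeholder URI):
  datastore_uri: "postgresql://spicedb:password@shared-postgres.example.internal:5432/spicedb"
```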
It's perfectly possible to run SpiceDB in multiple EKS clusters against the same underlying database. The challenge, as you noted, is upgrades. Multiple operators can work with the same underlying database as long as you coordinate upgrades in unison across all clusters. The operator moves a cluster forward to the version you've indicated (via the rapid or stable channel, or a pinned version/image) and runs the necessary migrations, which are typically backward compatible. Multiple operators running the same migration is probably okay, since migrations are meant to be idempotent, but it may be preferable to trigger the updates serially (e.g. update 1.26.0 -> 1.27.0 one cluster at a time) to be safe. Multi-stage migrations, however, do require coordination: because they happen in steps, all clusters have to move through them in unison. You cannot let the operator move your clusters through the different steps automatically, or clusters that haven't been updated yet could run into issues with the datastore. This is an interesting use case, and I think @User has probably thought about it
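A sketch of the pinning approach, assuming the operator's `channel` and `version` fields on the SpiceDBCluster spec (the version number is just illustrative). Pinning `version` keeps the operator from walking a cluster forward on its own:

```yaml
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: spicedb
spec:
  channel: stable
  # Pinned: the operator reconciles to exactly this version and runs its
  # migrations, but won't advance further until you change this field.
  version: v1.26.0
  config:
    datastoreEngine: postgres
  secretName: spicedb-config
```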
I've certainly run multi-cluster SpiceDB at $PREVIOUS_JOB, just not with the operator
I don't think there is documentation about this, and I think we should add some, so feel free to open an issue in the operator repo
Yep, what Victor described is how we do it internally as well. Let's say you have three SpiceDB clusters in different EKS clusters, each managed by the operator:

- Make sure they're all on the same channel (`stable` is the only option upstream right now).
- Make sure they're all on the same version; you'll want to explicitly set the `version` field when running multiple clusters.
- The status block of each SpiceDBCluster will indicate when new versions are available.
- Pick one of the three SpiceDBClusters to be the "migration leader" and set `spec.version` to the next version. That cluster will be the migration leader and run the migrations.
- Once that cluster has successfully upgraded, set `spec.version` on the rest of the clusters. Optionally, you can set `skipMigrations: true` on the non-migration leaders to roll out faster, though it's not a big deal if you don't; the other clusters will just run fast no-op migration jobs. (There's a sketch of this below.)

This is definitely deserving of a doc. Internally we have another operator that coordinates this rollout across multiple kube clusters, but it's highly specialized and at least right now we don't have plans to open-source it (but if that sounds like something you'd be interested in, let us know)