# spicedb
j
it highly depends on your datastore, node size and schema. we can give very rough estimates, but it really depends on those factors
p
datastore - postgres
node size - it's variable, it'd mainly depend on what type of resource spicedb needs more
schema - how can i quantify this?
rough estimates would be really helpful too!
j
so the variables are: number of connections allowed to your database, number of CPUs available per node (you likely want at least 2-4), and mem to the pods (which is used for the local cache)
p
2 vCPUs per spicedb pod? memory (with cache in mind) - would 1 GiB be enough? (as a starting point; i'm sure i'll have to tweak this when i push it into production, but i'm trying to understand how i'd identify the bottleneck and scale that resource accordingly)
j
2 vCPUs per pod, 1 GB of mem, 3 pods should get you to a reasonable starting point if your postgres allows for at least 60 concurrent connections (default max is 20 per SpiceDB)
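Those starting numbers can be sketched as a plain Kubernetes pod resource spec. This is a hypothetical Deployment fragment, not the operator's generated manifest; the name and structure are assumptions for illustration:

```yaml
# Hypothetical Deployment fragment sized per the advice above:
# 3 pods x 20 connections each = 60 Postgres connections required.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spicedb            # hypothetical name
spec:
  replicas: 3              # static replica count
  template:
    spec:
      containers:
        - name: spicedb
          image: authzed/spicedb
          resources:
            requests:
              cpu: "2"     # 2 vCPUs per pod
              memory: 1Gi  # the local cache lives in pod memory
            limits:
              memory: 1Gi
```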
p
gotcha. that helps, thank you! will report back with any findings once it's in production 😄
actually while I have you - where would the resource configuration live in a `SpiceDBCluster` object? i don't think it's been exposed anywhere (couldn't find any references in the spicedb-operator repo or the docs). i could add a `containers` property to the spec but not sure what the container name should be
j
@jzelinskie can provide some pointers there
p
Found this - https://github.com/authzed/spicedb-operator/issues/88 - which seems to say that HPA is not currently supported if I'm running this through the operator. is there a more idiomatic way of having autoscaling based on resource utilization without using HPA? there also doesn't seem to be a way to add node affinities, resource limits & requests, etc. to the deployments/pods - is this not supported when running through the operator?
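For context, a minimal `SpiceDBCluster` object (following the shape shown in the spicedb-operator README) looks like the sketch below; note that resource requests and affinities don't appear in this minimal spec. The `replicas` config key is an assumption based on operator examples:

```yaml
# Minimal SpiceDBCluster, following the spicedb-operator README's shape.
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  config:
    datastoreEngine: postgres
    replicas: 3          # assumption: replica count set via config in recent operator versions
  secretName: dev-spicedb-config  # secret holding datastore_uri and preshared_key
```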
j
> is there a more idiomatic way of having autoscaling based on resource utilization without using HPA?

as mentioned in the issue, it isn't recommended right now to autoscale
to do so effectively, the autoscaler would need to know how to tell SpiceDB to prewarm its caches, which is tracked in the issue linked from that issue ^
p
i see. then we'd need a static number of spicedb instances running regardless of the traffic/load?
j
today it is recommended to do so, yes
figure out your peak usage (+ some leeway for a pod going out of service due to rotation), and deploy accordingly
the goal is to eventually support autoscaling in some form or fashion
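The "peak + leeway" sizing can be sketched as a back-of-the-envelope calculation; every number below is illustrative, not measured, so benchmark your own workload:

```yaml
# Back-of-the-envelope static sizing (all numbers illustrative):
#   peak load:           ~3000 checks/sec
#   per-pod capacity:    ~1000 checks/sec at 2 vCPU / 1 GiB (measure yours)
#   pods needed at peak: ceil(3000 / 1000) = 3
#   + 1 spare for pod rotation/surge -> 4 replicas
spec:
  config:
    replicas: 4   # assumption: replicas is a config key on SpiceDBCluster
```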