Hi I'm having some problems with memory
# spicedb
r
Hi. I'm having some problems with memory consumption. We recently installed SpiceDB on a Kubernetes cluster using the official operator. We haven't started using it, yet we saw memory consumption creeping upward continuously until we removed it again. We use a PostgreSQL database running in AWS RDS. No other external dependencies (no metrics collection, telemetry, etc.). Any ideas what could be causing this? Suggestions for what to look for? Any config options we should look more closely at? Thankful for any hints. 🙇
e
Can you clarify what memory consumption was creeping up? The memory of the spicedb-operator pod? How did you install it, and at what version?
r
The actual SpiceDB pods. Installed the operator from https://github.com/authzed/spicedb-operator/releases/latest/download/bundle.yaml recently. Seems to have gotten me version 1.8.0 (ghcr.io/authzed/spicedb-operator:v1.8.0). The actual SpiceDB was then set up with:
```yaml
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
# ...
spec:
  config:
    replicas: 3
    datastoreEngine: postgres
    telemetryEndpoint: ""
  secretName: spicedb-config
```
The referenced secret contains the `datastore_uri` for our PostgreSQL server and a `preshared_key`.
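(For reference, the secret is roughly shaped like this; the key names match what we use, but the values below are just placeholders:)

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: spicedb-config
stringData:
  # Preshared key used by clients to authenticate against SpiceDB (placeholder value)
  preshared_key: "<redacted>"
  # Connection string for the RDS PostgreSQL instance (placeholder value)
  datastore_uri: "postgresql://user:password@<rds-endpoint>:5432/spicedb"
```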
e
Gotcha - so this would be an issue with SpiceDB itself, not the operator. When you say creeping up, how much, and over what timeframe?
r
Unfortunately it seems we've lost the logs from that test environment now. 😞 If there's nothing in particular that we should be aware of w.r.t. memory consumption, like a growing backlog of unread metrics or something, then I guess we'll wait and see if this happens again.
e
I can't think of anything that would happen with no traffic, at least not enough to be a concern
spicedb does have hotspot caches though, and they fill up with traffic
those sizes are configurable via flags
it's normal for memory use to creep up until those caches are full and then level out, and you control how big those caches are
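for example, something roughly like this in the cluster config — the cache flag names below are from memory and the config keys assume the operator passes them through to SpiceDB, so double-check `spicedb serve --help` for your version before relying on them:

```yaml
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
# ...
spec:
  config:
    replicas: 3
    datastoreEngine: postgres
    telemetryEndpoint: ""
    # Cache sizing (illustrative values; names assumed, verify against `spicedb serve --help`):
    dispatchCacheMaxCost: "30%"         # --dispatch-cache-max-cost
    dispatchClusterCacheMaxCost: "70%"  # --dispatch-cluster-cache-max-cost
    nsCacheMaxCost: "32MiB"             # --ns-cache-max-cost
  secretName: spicedb-config
```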