# spicedb
h
Hihi & happy new year from Iceland 💥 Sadly my permission-service started hitting context deadline exceeded errors over xmas, and I found that the spicedb-operator started spewing logs that Datadog flags as errors on a 1m interval. I can't find any commits to our environments that could have started this issue...
file_informer.go:238]  "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
controller.go:228]  "msg"="loading config" "path"="/opt/operator/config.yaml"
controller.go:257]  "msg"="config hasn't changed" "new hash"=93119378785381803 "old hash"=93119378785381803
pkg/mod/github.com/ecordell/client-go@v1.28.0-patchmeta/tools/cache/reflector.go:229: Watch close - /v1, Resource=secrets total 10 items received
Is the spicedb-operator stuck in a crash loop or are these logs possibly not related?
b
Hi there and Happy New Year! I think that is probably unrelated. It's normal for the long-running watch requests to periodically close and be re-opened.
h
It's possible that the context deadline errors were due to another change: in one specific environment I set the consistency on CheckPermission and ReadRelationships calls to FullyConsistent.
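For context, this is roughly what that looks like with the authzed-go client. A minimal sketch, not our actual service code: the endpoint, token, and the document/user identifiers are placeholders.

// Sketch (placeholder names): forcing FullyConsistent on a CheckPermission call
// using the authzed-go client.
package main

import (
	"context"
	"log"

	v1 "github.com/authzed/authzed-go/proto/authzed/api/v1"
	"github.com/authzed/authzed-go/v1"
	"github.com/authzed/grpcutil"
)

func main() {
	// Placeholder endpoint and token.
	client, err := authzed.NewClient(
		"spicedb.internal:443",
		grpcutil.WithSystemCerts(grpcutil.VerifyCA),
		grpcutil.WithBearerToken("sometoken"),
	)
	if err != nil {
		log.Fatal(err)
	}

	resp, err := client.CheckPermission(context.Background(), &v1.CheckPermissionRequest{
		// FullyConsistent bypasses most of SpiceDB's caching, which is what makes it expensive.
		Consistency: &v1.Consistency{
			Requirement: &v1.Consistency_FullyConsistent{FullyConsistent: true},
		},
		Resource:   &v1.ObjectReference{ObjectType: "document", ObjectId: "readme"},
		Permission: "view",
		Subject:    &v1.SubjectReference{Object: &v1.ObjectReference{ObjectType: "user", ObjectId: "someuser"}},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println(resp.Permissionship)
}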
b
That sounds much more likely to cause a deadline exceeded error. That can be very expensive as it results in less caching. The message from the operator is about the watch requests to the Kubernetes API server.
h
Yes, I set the FullyConsistent flag in that specific environment because client tests needed to see writes immediately, but plumbing ZedTokens through multiple permission-service pods was getting a tad complicated.
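For reference, the ZedToken alternative would look roughly like this: capture the token returned by the write and pass it as AtLeastAsFresh on the next check. A sketch reusing the client, ctx, and placeholder names from the snippet above.

// Sketch (placeholder names): use the ZedToken from a WriteRelationships response
// so the following check is at least as fresh as that write, without FullyConsistent.
writeResp, err := client.WriteRelationships(ctx, &v1.WriteRelationshipsRequest{
	Updates: []*v1.RelationshipUpdate{{
		Operation: v1.RelationshipUpdate_OPERATION_TOUCH,
		Relationship: &v1.Relationship{
			Resource: &v1.ObjectReference{ObjectType: "document", ObjectId: "readme"},
			Relation: "viewer",
			Subject:  &v1.SubjectReference{Object: &v1.ObjectReference{ObjectType: "user", ObjectId: "someuser"}},
		},
	}},
})
if err != nil {
	log.Fatal(err)
}

checkResp, err := client.CheckPermission(ctx, &v1.CheckPermissionRequest{
	// AtLeastAsFresh guarantees this check observes the write above while still
	// allowing caching, unlike FullyConsistent.
	Consistency: &v1.Consistency{
		Requirement: &v1.Consistency_AtLeastAsFresh{AtLeastAsFresh: writeResp.WrittenAt},
	},
	Resource:   &v1.ObjectReference{ObjectType: "document", ObjectId: "readme"},
	Permission: "view",
	Subject:    &v1.SubjectReference{Object: &v1.ObjectReference{ObjectType: "user", ObjectId: "someuser"}},
})
if err != nil {
	log.Fatal(err)
}
log.Println(checkResp.Permissionship)

The trade-off is exactly the complication mentioned above: the ZedToken has to travel from the write path to the check path, which is harder when writes and checks land on different permission-service pods.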