# spicedb
i
Yeah I see the following
kubectl apply --server-side -k  .
error: rawResources failed to read Resources: Load from path cert-manager failed: 'cert-manager' must be a file (got d='/code/authzed/spicedb-operator/examples/cockroachdb-tls-ingress/cert-manager')
exit status 1
e
what version of kubectl are you using?
i
server side is 1.25.0
Ok it seemed to be an error in the kubectl version I was running client side
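For anyone hitting the same "must be a file" error: the kustomization is parsed by the kubectl client, so the client and server versions are worth checking separately. A minimal sketch:

```shell
# Client version only (this is the kustomize build that parses the kustomization)
kubectl version --client

# Client and server versions together; kubectl supports a skew of
# at most one minor version between client and server
kubectl version
```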
But I seem to always get this CRDs error when deploying. Even after a few retries I still get at least one for the db
error: resource mapping not found for name: "cockroachdb-budget" namespace: "cockroachdb" from ".": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first
e
Ah, the cockroach operator does not yet support kube 1.25, see: https://github.com/cockroachdb/cockroach-operator/pull/929
for now that specific example will only work on kube <1.25 until we get a new version of crdb-operator
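For reference, the failing manifest can also be patched locally until a fixed operator release lands: Kubernetes 1.25 removed the policy/v1beta1 API, and PodDisruptionBudget has lived in policy/v1 since it went GA in 1.21. A hedged sketch of what the patched resource might look like (the name and namespace come from the error above; the spec values are placeholders, not taken from the actual cockroach-operator manifest):

```yaml
# policy/v1beta1 was removed in Kubernetes 1.25; the same
# PodDisruptionBudget fields work unchanged under policy/v1.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: cockroachdb-budget
  namespace: cockroachdb
spec:
  maxUnavailable: 1        # placeholder value
  selector:
    matchLabels:
      app: cockroachdb     # placeholder label
```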
i
Nice. First too old version and then too new of a version 😂 Thanks for the help
e
yeah there's a sweet spot apparently! hopefully crdb operator updates and we can remove the upper bound
i
Yeah looks good though
I have another question. Using 1.24.6, and I see that everything "looks" healthy. But I do get these weird messages at about a 15 sec interval, for about 5 seconds at a time, when making requests to spicedb using cockroachdb.
11:59AM FTL failed to write schema error="rpc error: code = Unknown desc = unable to list namespaces: ERROR: relation \"namespace_config\" does not exist (SQLSTATE 42P01)"
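For context: a missing "namespace_config" relation generally means the SpiceDB datastore migrations have not been run against the database that node is querying. The operator normally runs migrations itself, but they can also be run by hand; a hedged sketch (the connection URI is a placeholder, not from this deployment):

```shell
# Run all pending SpiceDB datastore migrations against CockroachDB.
# The conn URI below is a placeholder.
spicedb migrate head \
  --datastore-engine=cockroachdb \
  --datastore-conn-uri="postgresql://root@cockroachdb:26257/spicedb?sslmode=verify-full"
```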
e
what are you using to talk to spicedb? zed?
and do you see any error messages in the status of the SpiceDBCluster?
i
Yeah I am using zed
Let me check
I see the issue. I have three nodes but I'm running in single-node cluster mode
Thanks for the quick responses
e
how did you set up your 3 node cluster? if you set spec.config.replicas in the SpiceDBCluster object, it should give you 3 nodes and ensure everything is configured correctly
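For readers following along, a hedged sketch of what setting spec.config.replicas on a SpiceDBCluster might look like (everything apart from config.replicas, such as the names, secret, and datastore settings, is an illustrative assumption, not copied from this deployment):

```yaml
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev                        # illustrative name
spec:
  config:
    datastoreEngine: cockroachdb   # illustrative assumption
    replicas: 3                    # the field discussed above
  secretName: dev-spicedb-config   # illustrative assumption
```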
oh you're talking about crdb
yeah if you scale up crdb in singlenode mode they don't share data
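To expand on that last point: CockroachDB's single-node mode starts the server with cockroach start-single-node, which runs without replication, so scaling the StatefulSet just produces independent databases that never talk to each other. A proper multi-node cluster uses cockroach start with a --join list instead; a hedged sketch (hostnames and the --insecure flag are placeholders for illustration):

```shell
# Single-node mode: no replication; extra replicas become
# unrelated, non-communicating databases.
cockroach start-single-node --insecure

# Multi-node mode: nodes join into one cluster and share data.
cockroach start --insecure \
  --join=cockroachdb-0.cockroachdb,cockroachdb-1.cockroachdb,cockroachdb-2.cockroachdb
```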