# spicedb
j
Maybe the problem is that I'm configuring tlsSecretName but not dispatchUpstreamCASecretName or one of the other variables? Is there a guide that shows which of these I need to set?
Ah yeah, the operator docs do mention it should be set to the CA cert of the tlsSecretName, ok https://authzed.com/docs/spicedb/operator
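(for context, here's a rough sketch of how those two options sit together in a SpiceDBCluster manifest - names, namespace, and replica count are made up, not a verified config:)
```yaml
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: spicedb
  namespace: spicedb
spec:
  secretName: spicedb-config                   # operator config secret
  config:
    replicas: 2                                # dispatch only matters with >1 replica
    tlsSecretName: spicedb-tls                 # serving cert + key for the grpc port
    dispatchUpstreamCASecretName: spicedb-ca   # CA used to verify dispatch peers
```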
e
Yeah it's there but we can definitely improve those docs
j
you were right, disabling dispatch fixed the issue. going to do that for now and then figure out how to fix it. I've been looking at that example a little bit but will need to look more closely. Our use case is PostgreSQL running somewhere with SpiceDB running in-cluster and services accessing it running in the same cluster. I'll contribute an example later if we can get things working. I'm not using cert-manager right now because I couldn't figure out an easy way to inject the CA cert into a different secret in the consuming service namespace (i.e. we have services in namespace foo and they need to communicate with spicedb over TLS, so the consuming namespace foo needs the CA cert and the spicedb namespace needs the cert + key)
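(if it helps anyone with the same postgres-outside/spicedb-inside layout: the operator reads the datastore connection and preshared key from the secret referenced by secretName - the values below are placeholders:)
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: spicedb-config
  namespace: spicedb
stringData:
  preshared_key: "change-me"   # clients present this to SpiceDB
  datastore_uri: "postgresql://spicedb:password@postgres.example.com:5432/spicedb"
```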
Would some kind of defaulting make sense? If the dispatch cert isn't provided, can't SpiceDB use the cert in tlsSecretName that it's presenting for the non-dispatch port?
Also kinda strange in general that cert issues result in timeout errors rather than, well, an x509 error of some kind
e
generally the certs you use for dispatch are not the same ones you use for securing incoming request traffic
so the CAs will be different
j
Hmm. Maybe our deployment is a bad idea? Our plan is to run everything within-cluster, so everything is signed with a cert for spicedb.spicedb.svc.cluster.local, and it seemed like that could also be the cert used for validating dispatch APIs, right?
e
that all sounds good to me, you can definitely use the same CA for both things, but you will have issues if you rotate the CA at the same time as the certs
I haven't used any of these, but there is this too: https://cert-manager.io/docs/tutorials/syncing-secrets-across-namespaces/
we just put ingress resources in the same namespace as spicedb, and then talk to it through that. that's generally better to do unless your consuming applications are going to use kuberesolver to get the list of nodes - running an envoy like in the example in the repo will load balance for you
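(I haven't tried it myself, but the kubernetes-reflector approach from that tutorial would look roughly like this for your CA-into-foo problem - a sketch, with the annotation names taken from the reflector docs and everything else made up:)
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: spicedb-ca
  namespace: spicedb
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "foo"
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
    reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "foo"
data:
  ca.crt: <base64-encoded CA certificate>   # only the CA, so foo never sees the key
```
since only ca.crt is in this secret, the consuming namespace gets the trust anchor without the serving key ever leaving the spicedb namespace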
j
We use Istio for our ingress gateway, we could set all that up but it seemed like more work than we wanted to take on right now
What's the benefit of using an ingress when all access is within-cluster? Is it just load balancing? I figured we'd get some amount of load balancing via the service DNS and dispatch mechanism
e
yeah the main benefit is load balancing, but it's not strictly necessary. if you're looking for ways to simplify, you could:
- generate a 10-year self-signed CA and stick it everywhere you need
- disable TLS for dispatch with:
  ```yaml
  dispatchClusterTLSCertPath: ""
  dispatchClusterTLSKeyPath: ""
  dispatchUpstreamCASecretName: ""
  ```
- just use the dns resolver for the grpc connection, but you might get occasional failures when pods roll or rebalance
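(in context, assuming those keys sit under spec.config next to the other options, that's something like:)
```yaml
spec:
  config:
    tlsSecretName: spicedb-tls        # keep TLS on the main grpc port
    dispatchClusterTLSCertPath: ""    # empty values turn off dispatch TLS
    dispatchClusterTLSKeyPath: ""
    dispatchUpstreamCASecretName: ""
```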
j
Do you know of anyone using Istio to do east-west load balancing? i.e. injecting the Istio sidecars and using whatever Istio does to load balance/route traffic? I think it means we'd have double encryption, though (encryption at the Istio layer with mTLS and then encryption again at the SpiceDB level, which we need because the client needs to see SpiceDB as encrypted, due to that outstanding authzed-py/gRPC issue)
e
I think we've had some folks mention using Istio in Discord, but I don't think anyone has shared a full writeup, and I can't think of any customers doing it off the top of my head. I don't know of any reason it wouldn't work, though. Not sure if it would be double encryption or just re-encryption. For what it's worth, we use contour/envoy and don't have any issues doing re-encryption (i.e. external traffic uses public certs, envoy re-encrypts it to use the internal certs before sending the traffic to spicedb pods). Istio would just be a different way of running envoy. @bison used to work on Istio and might have some insights for us (he's out today though)
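(for the curious, a rough sketch of that re-encryption pattern as a Contour HTTPProxy - the fqdn, port, and secret names here are invented:)
```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: spicedb
  namespace: spicedb
spec:
  virtualhost:
    fqdn: spicedb.example.com
    tls:
      secretName: public-tls          # public-facing cert, terminated at envoy
  routes:
    - services:
        - name: spicedb
          port: 50051
          protocol: h2                # re-encrypt with TLS + HTTP/2 to the pods
          validation:
            caSecret: spicedb-ca
            subjectName: spicedb.spicedb.svc.cluster.local
```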
BTW, sorry if this is too inside baseball, but this is part of the reason we don't have ingress management in the open source operator; there are just too many options to support. Ideally the Gateway APIs would support everything we needed and then you'd just plug in your favorite gateway provisioner and call it a day, but right now some features SpiceDB needs are missing...like grpc re-encryption.
j
I appreciate the inside baseball! Really useful to have the context IMO