# spicedb
j
Hi all, I'm working on setting up the spicedb-operator but don't want to give it more permissions than it truly needs. Does anyone know if it's possible to withhold some of the Kubernetes permissions the operator expects without affecting its functionality? Specifically, I'd like to not grant it the ability to create or patch Role and RoleBinding resources within our cluster. However, if I try to do that, the operator logs a lot of warnings and fails to provision SpiceDB clusters. I believe this is configured in the ensureRoleBinding function. Logs I'm seeing:
```
"requeueing after api error" err="context canceled" syncID="H2LCy" controller="spicedbclusters" obj={"name":"spicedbcluster","namespace":"namespace"}
"requeueing after error" err="rolebindings.rbac.authorization.k8s.io \"spicedbcluster\" is forbidden: User \"system:serviceaccount:spicedb-operator:spicedb-operator\" cannot patch resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"namespace\"" syncID="j8aUD" controller="spicedbclusters" obj={"name":"spicedbcluster","namespace":"namespace"}
```
e
There's not really a way around that at the moment. The operator needs to provision a service account for the SpiceDB pods that has specific permissions for peer discovery.
But we could allow you to specify the ServiceAccount up front (RBAC would be up to you) and the operator could just confirm the account has the permissions that it needs?
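Something like this, for example (a rough sketch of the pre-created objects, reusing the names from the logs above; the endpoints/pods rules are an assumption about what peer discovery needs, not necessarily what the operator actually grants):

```yaml
# Sketch: user-managed ServiceAccount plus RBAC for the SpiceDB pods.
# The endpoints/pods rules below are an assumption about peer-discovery
# requirements; check what ensureRoleBinding actually provisions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spicedbcluster
  namespace: namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spicedbcluster
  namespace: namespace
rules:
  - apiGroups: [""]
    resources: ["endpoints", "pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spicedbcluster
  namespace: namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: spicedbcluster
subjects:
  - kind: ServiceAccount
    name: spicedbcluster
    namespace: namespace
```

The idea being that the operator would validate these objects exist with the needed permissions rather than creating them itself.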
j
Thanks @ecordell, yeah that'd be a great alternative. I tried doing something similar by creating the resources manually beforehand but I wasn't able to get it working.
e
Would you mind opening a GH issue with what you're looking for? If there are any other permissions you'd like to avoid, that would help as well. IMO the ideal would be that anything it has permission to do, it does, and anything it doesn't have permission to do, it just warns you about / reports in the status. But that will be a bit of work to get there.
j
Of course, I'll file a GH issue for this.