# spicedb
r
Is making a bulk permission request with ~1k checks feasible? Right now I'm seeing the request take several seconds, even though many of the checks share the same subproblems.
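For context, this is roughly the shape of the call (a minimal sketch assuming the authzed-go client; the endpoint, token, and the document/user/view names are placeholders):
```go
package main

import (
	"context"
	"fmt"
	"log"

	v1 "github.com/authzed/authzed-go/proto/authzed/api/v1"
	"github.com/authzed/authzed-go/v1"
	"github.com/authzed/grpcutil"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	client, err := authzed.NewClient(
		"localhost:50051", // placeholder endpoint
		grpcutil.WithInsecureBearerToken("your-preshared-key"),
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatal(err)
	}

	// One CheckBulkPermissions request carrying ~1k items instead of
	// ~1k individual CheckPermission RPCs.
	items := make([]*v1.CheckBulkPermissionsRequestItem, 0, 1000)
	for i := 0; i < 1000; i++ {
		items = append(items, &v1.CheckBulkPermissionsRequestItem{
			Resource:   &v1.ObjectReference{ObjectType: "document", ObjectId: fmt.Sprintf("doc-%d", i)},
			Permission: "view",
			Subject: &v1.SubjectReference{
				Object: &v1.ObjectReference{ObjectType: "user", ObjectId: "alice"},
			},
		})
	}

	resp, err := client.CheckBulkPermissions(context.Background(), &v1.CheckBulkPermissionsRequest{Items: items})
	if err != nil {
		log.Fatal(err)
	}

	// Each pair holds either a result item or a per-check error.
	for _, pair := range resp.Pairs {
		if item := pair.GetItem(); item != nil {
			fmt.Println(pair.Request.Resource.ObjectId, item.Permissionship)
		} else {
			fmt.Println(pair.Request.Resource.ObjectId, "error:", pair.GetError())
		}
	}
}
```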
v
Same resource, different subjects? Different resources, same subject? Same resource and subject but different permissions? Or a combination of all of the above? 1000 elements is probably going to cause a lot of fan-out, so you'll want to scale up the number of cores allocated accordingly. Is this running on your machine or in your dev/prod env?
r
It's basically different resources, same subject, same permission. The resource checks also share the same subproblems. I'll also be evaluating caveats in the call, probably 1 per check. Are there common settings I should look into tweaking along with increasing resources? This is in our staging and production envs.
I saw a limit on concurrent goroutines that defaults to 50. I'm wondering if I should increase that alongside adding more cores.
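For the caveat part, here's roughly how I'd attach per-check context (a sketch under the same assumptions; the caveat parameter name is made up). I think the limit I saw is `--dispatch-concurrency-limit`, but I may be looking at the wrong flag.
```go
package main

import (
	v1 "github.com/authzed/authzed-go/proto/authzed/api/v1"
	"google.golang.org/protobuf/types/known/structpb"
)

// checkItem builds one bulk-check item for a single resource, with the shared
// subject and a per-check caveat context. The caveat parameter name
// ("ip_address") is hypothetical; use whatever your schema's caveat expects.
func checkItem(resourceID, subjectID, callerIP string) (*v1.CheckBulkPermissionsRequestItem, error) {
	caveatCtx, err := structpb.NewStruct(map[string]interface{}{
		"ip_address": callerIP,
	})
	if err != nil {
		return nil, err
	}
	return &v1.CheckBulkPermissionsRequestItem{
		Resource:   &v1.ObjectReference{ObjectType: "document", ObjectId: resourceID},
		Permission: "view",
		Subject: &v1.SubjectReference{
			Object: &v1.ObjectReference{ObjectType: "user", ObjectId: subjectID},
		},
		Context: caveatCtx, // evaluated against the caveat on the matched relationship
	}, nil
}
```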
v
your scenario is the best-case scenario for bulk check since it can query the database more efficiently, so I'd expect better behaviour there. My suggestion is to check where the time is being spent with OpenTelemetry
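If it helps, here's a minimal sketch of client-side tracing with otelgrpc (collector endpoint and token are placeholders). On the server side, SpiceDB exposes its own OTel flags (something like `--otel-provider` / `--otel-endpoint`; check `spicedb serve --help`), which is where the per-dispatch timing will show up.
```go
package main

import (
	"context"
	"log"

	"github.com/authzed/authzed-go/v1"
	"github.com/authzed/grpcutil"
	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx := context.Background()

	// Send client-side spans to an OTLP collector (endpoint is a placeholder).
	exporter, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("otel-collector:4317"),
		otlptracegrpc.WithInsecure(),
	)
	if err != nil {
		log.Fatal(err)
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer tp.Shutdown(ctx)
	otel.SetTracerProvider(tp)

	// Instrument the gRPC connection so every SpiceDB RPC (including
	// CheckBulkPermissions) is recorded as a span with its duration.
	client, err := authzed.NewClient(
		"spicedb.staging.internal:50051", // placeholder endpoint
		grpcutil.WithInsecureBearerToken("your-preshared-key"),
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = client // issue bulk checks as usual; compare span timings per call
}
```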
r
Yeah, I still need to set that up 😦