# spicedb
l
I suspect it's something on the server side. Would collecting metrics be helpful?
j
you can also try setting
--grpc-max-workers=0
on the server, which will create a new worker per request
if you set that and GOMAXPROCS on the server to something like 96, it should be as concurrent as possible (assuming it doesn't hit the limit to connect to CRDB)
l
will try it out. also, please refer to the first issue in my original message. any ideas on that? could it be telling us something?
j
if the contention is due to the number of incoming connections, it could make it worse
you're basically kicking off 150K queries (or worse), which is probably overwhelming either the grpc connection pool or the goroutine scheduler
l
it's possible, will try to stagger the number of goroutines fired at the same time. if that's the cause, then at some point staggering should give us better numbers and better CPU utilization
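A minimal sketch of one way to stagger the client side, using a buffered channel as a semaphore to cap in-flight goroutines. `checkOnce` and the `maxInFlight` value are placeholders, not from the discussion; the 150K total comes from the number mentioned above.

```go
package main

import (
	"sync"
)

// checkOnce stands in for a single SpiceDB request (e.g. a CheckPermission
// call); swap in whatever the real load test actually performs.
func checkOnce(i int) {
	_ = i // ... issue one request ...
}

func main() {
	const totalRequests = 150000 // total queries to fire
	const maxInFlight = 512      // cap on concurrent goroutines (tune as needed)

	sem := make(chan struct{}, maxInFlight) // semaphore limiting concurrency
	var wg sync.WaitGroup

	for i := 0; i < totalRequests; i++ {
		sem <- struct{}{} // blocks once maxInFlight requests are in flight
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot when done
			checkOnce(i)
		}(i)
	}
	wg.Wait()
}
```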
j
yep
the way we run scale testing is to set a QPS target, and then fire off that amount of queries
and then we see what limits that hits, since it matches traffic patterns more closely in a real-world application
l
are your scale tests independent enough that I can run them?
or are they tied to your SaaS offering
j
they are internal
l
ack
j
we're hoping to publish portions of them soon, but we're still working on that part
but my recommendation would be to set up your load test to define a QPS, then kick off that many requests over a period of one second, then again the next second, etc
and see when you hit your limits that way
you can also watch how long the queries take via the exported prometheus metrics
and graph accordingly
so that way, you can set an expected input traffic level, and see your P50, P95 and P99 latencies based on that
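A rough sketch of that QPS-driven pattern: launch a fixed number of requests at the start of each second, record per-request latencies, and summarize them into P50/P95/P99. `doRequest`, `targetQPS`, and the run length are placeholder assumptions; in practice the percentiles would come from the exported Prometheus metrics rather than the client.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
	"time"
)

// doRequest stands in for one SpiceDB query; swap in the real client call.
func doRequest() {
	time.Sleep(2 * time.Millisecond) // placeholder work
}

func main() {
	const targetQPS = 1000 // requests launched per second
	const seconds = 10     // length of the test run

	latencies := make(chan time.Duration, targetQPS*seconds)
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	var wg sync.WaitGroup
	for s := 0; s < seconds; s++ {
		// Kick off targetQPS requests at the start of each second.
		for i := 0; i < targetQPS; i++ {
			wg.Add(1)
			go func() {
				defer wg.Done()
				start := time.Now()
				doRequest()
				latencies <- time.Since(start)
			}()
		}
		<-ticker.C // wait for the next second before the next batch
	}
	wg.Wait()
	close(latencies)

	// Summarize latencies into P50/P95/P99.
	var samples []time.Duration
	for l := range latencies {
		samples = append(samples, l)
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	pct := func(p float64) time.Duration {
		return samples[int(float64(len(samples)-1)*p)]
	}
	fmt.Printf("P50=%v P95=%v P99=%v\n", pct(0.50), pct(0.95), pct(0.99))
}
```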
l
what is the metrics endpoint I can monitor? do I need to do something to open up that endpoint?
j
port 9090 by default
it exports Prometheus-compliant metrics
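A quick sanity check that the endpoint is reachable. The address and the `/metrics` path below are assumptions (the path follows the usual Prometheus convention, and `localhost:9090` assumes the container's metrics port is published to the host); Prometheus itself would normally scrape this endpoint directly.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumes SpiceDB's metrics port is reachable on localhost:9090,
	// e.g. via `docker run -p 9090:9090 ...`; adjust the address as needed.
	resp, err := http.Get("http://localhost:9090/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// Prints the Prometheus text-format output for a quick eyeball check.
	fmt.Println(string(body))
}
```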
l
ok, I simply point Prometheus at this port?
j
l
after Docker port mapping of course ...
ok