My gRPC client is getting this error
# spicedb
n
My gRPC client is getting this error when a lot is being written into SpiceDB. Is the flag referenced one I'd set in my client or in SpiceDB itself? Thanks in advance! https://cdn.discordapp.com/attachments/844600078948630559/1245868963511013417/image_720.png?ex=665a5164&is=6658ffe4&hm=113b3b2a68f4e63161b96d9a9b417dcf8d493caadd163d407cbe907d8b5ae891&
"a lot is being written" = high write load of ~130/sec
j
did you increase the watch buffer size?
v
there are two flags here to play with:
```
--datastore-watch-buffer-length uint16            how large the watch buffer should be before blocking (default 1024)
--datastore-watch-buffer-write-timeout duration   how long the watch buffer should queue before forcefully disconnecting the reader (default 1s)
```
n
Ah. My question (better phrased): Where do I set those flags? (fyi, we've "configured & deployed" the operator in EKS as per the guide @ https://authzed.com/docs/spicedb/ops/deploying-spicedb-on-eks#configuring-and-deploy-spicedb)
(fyi, my timezone is UTC-0600; sorry for responding slowly to other timezones)
v
You put it in the config block of your SpiceDBCluster. You'd need to camel-case the flag name, like "datastoreWatchBufferWriteTimeout"
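For illustration, a minimal SpiceDBCluster sketch with those two flags in the config block; the cluster name, datastore engine, secret name, and the specific values here are placeholders, not recommendations:
```yaml
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  config:
    datastoreEngine: postgres               # whichever engine you already run
    # camelCased equivalents of the CLI flags shown above
    datastoreWatchBufferLength: "2048"
    datastoreWatchBufferWriteTimeout: "5s"
  secretName: dev-spicedb-config
```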
n
Are the flags meant to be used together? Or only one at a time?
If together, is there an ideal rate I should be targeting (e.g. a little more/less than X% of what my network supports)?
And thank you very much!
v
it depends on the rate you are writing to SpiceDB, and the rate you are consuming from the Watch API. If each time your client gets an event from the Watch API it performs an operation that takes X seconds to process, then at the very least your timeout should be set to the maximum value observed there. You can also increase the buffer if you observe spikes in events.
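As a purely illustrative application of that rule of thumb (the processing time, burst size, and resulting values below are hypothetical):
```yaml
config:
  # If the slowest observed per-event processing on the consumer side is ~2s,
  # keep the write timeout comfortably above it:
  datastoreWatchBufferWriteTimeout: "3s"
  # If write bursts can briefly outpace the consumer by a few thousand events,
  # raise the buffer above the 1024 default to absorb the spike:
  datastoreWatchBufferLength: "4096"
```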
n
Thank you!