# spicedb
Update: this is more complicated than I initially anticipated. I had a quick and dirty hack in place to allow large imports, which got the job done for me, but as I started to formalize the hack into more robust code, I found a number of things that should be considered:

1. The gRPC server has a limit on message sizes. This can easily be solved by batching the requests.
2. Batching is tricky: serially executing each batch works to an extent, but it's not very scalable. This could be improved by executing each batch in a goroutine (see the sketch after this list). Of course, that comes with some risk, as there's roughly a 9k batch size limit (see 3). If a `zed import` is trying to import 6 million+ rows, that's about 650 connections each trying to shove through 9k tuples.
3. Postgres and MySQL both have a limit on how many placeholders you can have in a single query, which appears to be 65535 for both (https://stackoverflow.com/a/49379324, https://stackoverflow.com/a/24447922). That roughly translates to a maximum of 9362 relationship tuple writes per query (assuming each write requires 7 placeholders). I didn't initially hit this limitation because I was testing with an in-memory database.
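
To make points 2 and 3 concrete, here's a rough sketch (not `zed`'s actual code) of what placeholder-aware batching with bounded concurrency could look like. The `tuple` type, the `writeBatch` helper, and the `maxInFlight` cap are all placeholders/assumptions, and `errgroup` is just one way to limit how many goroutines run at once:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/sync/errgroup"
)

// tuple is a stand-in for a parsed relationship tuple; the real importer
// would use the SpiceDB v1 API types instead.
type tuple struct {
	Resource, Relation, Subject string
}

const (
	// Postgres and MySQL cap a single statement at 65535 placeholders, and
	// each tuple write needs roughly 7 of them, so 65535 / 7 = 9362 tuples
	// is the most a single batch can safely contain.
	placeholderLimit     = 65535
	placeholdersPerTuple = 7
	maxBatchSize         = placeholderLimit / placeholdersPerTuple // 9362

	// Cap on concurrent in-flight batches, so a 6M-row import doesn't open
	// ~650 connections at once. The right number here is a guess.
	maxInFlight = 8
)

// writeBatch is a hypothetical helper wrapping a single WriteRelationships
// call for one batch of tuples.
func writeBatch(ctx context.Context, batch []tuple) error {
	// ... issue one gRPC WriteRelationships request here ...
	fmt.Printf("writing batch of %d tuples\n", len(batch))
	return nil
}

// importAll splits the tuples into placeholder-safe batches and writes them
// with a bounded number of goroutines.
func importAll(ctx context.Context, tuples []tuple) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(maxInFlight)

	for start := 0; start < len(tuples); start += maxBatchSize {
		end := start + maxBatchSize
		if end > len(tuples) {
			end = len(tuples)
		}
		batch := tuples[start:end] // new variable per iteration, safe to capture
		g.Go(func() error { return writeBatch(ctx, batch) })
	}
	return g.Wait()
}

func main() {
	// Fake input: 25k tuples splits into three batches (9362 + 9362 + 6276).
	tuples := make([]tuple, 25000)
	if err := importAll(context.Background(), tuples); err != nil {
		log.Fatal(err)
	}
}
```

Bounding the number of in-flight batches is what keeps the goroutine-per-batch idea from turning into hundreds of simultaneous connections, but each batch is still pinned to the ~9362-tuple ceiling imposed by the placeholder limit.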
The conclusion is, I think, that augmenting `zed import` to support large-scale relationship sets will require a bit more design and planning than my initial hacky batching solution offers, and it might extend to making changes to SpiceDB, not just `zed`, so it's possible to batch the batches (this is really just turtles all the way down 🐢).