floof-rfcs/WIP/ARCHITECTURE.txt

mutex for distribution + packet spooler
in the elixir version, this used one process for each of them,
but that led to race conditions and deadlocks between these parts...
ok, what do I want
* spooler should use flat files, split into multiple files when they get too big (see the spool sketch after this list)
* no TLS, it just makes stuff more complicated, and we also can't trust any nodes in the network
besides those originating/producing messages; those have pubkeys + signatures, which we can manage through an allowlist
* it might be the case that there exist clusters of servers which trust each other,
but they probably will be connected by a trusted network, so no need for TLS there either.
* might make sense to split sockets into read, write parts
* use 2 threads per connection
* worker threadpool for filtering
* 16-bit message length fields, which make abuse and such much harder (e.g. the problematic stuff is mostly images and large binaries,
think mostly copyright bs, porn, etc., maybe malware sharing. don't want any of that,
even if parts would be nice, because moderating that is too much of a hassle, and really exhausting)
max 64KiB messages have another advantage: they make it much harder to completely DDoS the network
and exhaust the RAM and disk space of participating servers. (see the framing sketch after this list)
* need multiple channels, due to feedback loop
* use non-async rust, so we can use `sendfile(2)` via std::io::copy! (see the distribution sketch after this list)
* shard messages according to first bytes of signature
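a minimal sketch of the flat-file spooler with rotation; the layout (one append-only file per
sequence number) and the max_len threshold for "too big" are assumptions, not decided yet:

    use std::fs::{File, OpenOptions};
    use std::io::{self, Write};
    use std::path::PathBuf;

    /// append-only flat-file spool that rolls over to a fresh file once the
    /// current one grows past `max_len`.
    struct Spool {
        dir: PathBuf,
        seq: u64,
        current: File,
        written: u64,
        max_len: u64,
    }

    impl Spool {
        fn open(dir: PathBuf, max_len: u64) -> io::Result<Self> {
            let current = OpenOptions::new().create(true).append(true).open(dir.join("spool.0"))?;
            let written = current.metadata()?.len();
            Ok(Spool { dir, seq: 0, current, written, max_len })
        }

        fn append(&mut self, frame: &[u8]) -> io::Result<()> {
            if self.written + frame.len() as u64 > self.max_len {
                // current file is "too big": start the next one
                self.seq += 1;
                self.current = OpenOptions::new()
                    .create(true)
                    .append(true)
                    .open(self.dir.join(format!("spool.{}", self.seq)))?;
                self.written = 0;
            }
            self.current.write_all(frame)?;
            self.written += frame.len() as u64;
            Ok(())
        }
    }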
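a minimal sketch of the 16-bit length framing; the big-endian u16 prefix and the
read_frame/write_frame names are just illustrative, the exact wire encoding isn't pinned down yet:

    use std::io::{self, Read, Write};

    /// read one frame: a big-endian u16 length, then that many bytes.
    /// the u16 prefix naturally caps a frame at just under 64 KiB.
    fn read_frame<R: Read>(r: &mut R) -> io::Result<Vec<u8>> {
        let mut len_buf = [0u8; 2];
        r.read_exact(&mut len_buf)?;
        let len = u16::from_be_bytes(len_buf) as usize;
        let mut body = vec![0u8; len];
        r.read_exact(&mut body)?;
        Ok(body)
    }

    /// write one frame, refusing anything that doesn't fit the u16 length field
    fn write_frame<W: Write>(w: &mut W, body: &[u8]) -> io::Result<()> {
        let len = u16::try_from(body.len())
            .map_err(|_| io::Error::new(io::ErrorKind::InvalidInput, "message too large for u16 framing"))?;
        w.write_all(&len.to_be_bytes())?;
        w.write_all(body)
    }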
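and a rough sketch of the distribution path, assuming a spool entry is a plain File that already
contains the framed wire bytes; on Linux, std::io::copy from a File to a TcpStream can go through
sendfile(2) internally, which is the point of staying non-async:

    use std::fs::File;
    use std::io;
    use std::net::TcpStream;

    /// "distribute message": stream one spooled entry to a peer's conn/write socket.
    /// std::io::copy can use sendfile(2) on Linux for a File -> TcpStream copy,
    /// so the message body doesn't have to pass through userspace buffers.
    fn distribute(spool_entry: &mut File, conn_write: &mut TcpStream) -> io::Result<u64> {
        io::copy(spool_entry, conn_write)
    }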
Channels between threads:
* initially, from spool -> conn/write
* "send :ihave for all entries in the spooler"
* conn/read -> filterpool
* "received message"
* conn/read -> conn/write
* "got :ihave, send :pull"
* filterpool -> conn/write
* "send :ihave for latest slice for recently changed pubkey (multiple such messages should be coalesced)"
* "distribute message" via sendfile
no coordination should be necessary for closed read/write channels
(the appropriate threads just terminate), but it is necessary to have
a way to schedule a re-connection after both terminate. So we need a
central list of configured peers, where the associated threads are linked
via AtomicUsize + event-listener::Event
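a sketch of that central peer list, assuming the event-listener 2.x blocking API
(Event::listen + EventListener::wait); field and function names are illustrative:

    use std::sync::atomic::{AtomicUsize, Ordering};
    use event_listener::Event;

    /// one entry in the central list of configured peers
    struct Peer {
        addr: String,
        /// how many of the two per-connection threads (read + write) are still alive
        live_threads: AtomicUsize,
        /// signalled by whichever thread terminates last
        dead: Event,
    }

    impl Peer {
        /// called by a conn/read or conn/write thread on its way out;
        /// no other coordination is needed, the thread just returns afterwards.
        fn thread_exited(&self) {
            if self.live_threads.fetch_sub(1, Ordering::AcqRel) == 1 {
                // both halves are gone, wake the supervisor so it can schedule a reconnect
                self.dead.notify(1);
            }
        }

        /// supervisor side: block until both threads of this peer are gone
        fn wait_until_dead(&self) {
            while self.live_threads.load(Ordering::Acquire) != 0 {
                let listener = self.dead.listen();
                // re-check after registering the listener so a notify can't slip past us
                if self.live_threads.load(Ordering::Acquire) == 0 {
                    break;
                }
                listener.wait(); // event-listener 2.x blocking wait
            }
        }
    }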
the filterpool acquires a mutex for writing to a shard
(allocate an array with 2^16 slots for that)
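a sketch of that shard table, assuming "first bytes of the signature" means the first two bytes
(which matches the 2^16 slots); what actually lives inside a shard is left open:

    use std::sync::Mutex;

    /// 2^16 shards, indexed by the first two bytes of a message signature.
    /// the filterpool locks exactly one slot while it writes to that shard.
    struct ShardTable {
        shards: Vec<Mutex<ShardState>>, // 65536 slots; a Vec keeps this off the stack
    }

    #[derive(Default)]
    struct ShardState {
        // e.g. current spool file handle, per-pubkey "latest slice" bookkeeping, ...
    }

    impl ShardTable {
        fn new() -> Self {
            ShardTable {
                shards: (0..(1usize << 16)).map(|_| Mutex::new(ShardState::default())).collect(),
            }
        }

        /// shard according to the first bytes of the signature (panics if it is shorter than 2 bytes)
        fn shard_for(&self, signature: &[u8]) -> &Mutex<ShardState> {
            let idx = u16::from_be_bytes([signature[0], signature[1]]) as usize;
            &self.shards[idx]
        }
    }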