spool file organization:

the directory tree looks like this:

{SigAlgo:04x}/{:02x}/{:02x}/...
the first level is the SigAlgo identifier of the signature algorithm in use.

the second and third levels are the first 2 bytes of the public key of a party, encoded as hexadecimal.

the public keys all have the same length, and are encoded as urlsafe base64.
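
The layout above can be sketched as a small path helper. This is a minimal illustration, assuming the SigAlgo identifier is a plain integer and the public key is available as raw bytes; the function names `spool_dir` and `encode_pubkey` are hypothetical, and whether base64 padding is kept in file names is not stated in the spec.

```python
import base64

def spool_dir(sig_algo: int, pubkey: bytes) -> str:
    """Derive the spool directory for a public key:
    first level  = SigAlgo identifier, 4 hex digits,
    second/third = first 2 bytes of the raw public key, 2 hex digits each."""
    return "{:04x}/{:02x}/{:02x}".format(sig_algo, pubkey[0], pubkey[1])

def encode_pubkey(pubkey: bytes) -> str:
    """Encode a public key as urlsafe base64, as used in file names."""
    return base64.urlsafe_b64encode(pubkey).decode("ascii")
```

e.g. `spool_dir(1, bytes([0xab, 0xcd, 0x01]))` yields `"0001/ab/cd"`.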
## per public key ...

... there can be a number of associated files:

- {pubkey}.{start}.data contains the actual data (* =DataS; {start} is the urlsafe base64 encoded starting point *)
- {pubkey}.{start}.meta contains the offsets of the data blobs (* =MetaS *)
- {pubkey}.lock is a lock file for the public key, which is used to prevent overlapping writes and GCs ...
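
The naming scheme for these files can be sketched as follows; `spool_files` is a hypothetical helper name, and it assumes the pubkey and starting point are already urlsafe-base64 strings, as described above.

```python
def spool_files(pubkey_b64: str, start_b64: str) -> dict:
    """Names of the files associated with one public key inside its
    spool directory; pubkey and start are urlsafe-base64 strings."""
    return {
        "data": f"{pubkey_b64}.{start_b64}.data",  # =DataS: the actual blobs
        "meta": f"{pubkey_b64}.{start_b64}.meta",  # =MetaS: blob offsets
        "lock": f"{pubkey_b64}.lock",              # guards writes and GC
    }
```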
<proto>

MetaS ::= [*]MetaEntry

MetaEntry ::= offset:u32(big-endian)

DataS ::=
  (* @siglen is inferred from the used SigAlgo *)
  [*]XferBlob<@siglen>

</proto>

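Per the grammar above, a MetaS file is simply a flat sequence of big-endian u32 offsets. A minimal round-trip sketch (the function names `parse_metas` / `build_metas` are illustrative, not from the spec):

```python
import struct

def parse_metas(raw: bytes) -> list:
    """Parse a MetaS blob: [*]MetaEntry, each a big-endian u32 offset
    into the corresponding data file."""
    if len(raw) % 4 != 0:
        raise ValueError("truncated MetaS: length not a multiple of 4")
    return [off for (off,) in struct.iter_unpack(">I", raw)]

def build_metas(offsets: list) -> bytes:
    """Serialize a list of offsets back into MetaS form."""
    return b"".join(struct.pack(">I", o) for o in offsets)
```
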
... for compaction, the corresponding pubkey gets locked,

- new temporary files are created in the corresponding directory,
- the chunks get sorted, and, starting from the newest blob, going in reverse,
- the sizes of the slices get calculated, until a slice is found which hits the maximum size per pubkey (usually available storage space * 0.8, divided by the number of known pubkeys),
- then we cut off the remaining, not yet processed blobs/slices,
- and now, starting from the first kept blob and going forward,
- write all of them to a new data file, and create a corresponding metadata file.

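The size-based cutoff in the steps above can be sketched like this. It assumes blobs are sorted oldest-to-newest and that the blob which would push the running total past the cap is itself dropped (the spec's "hits the maximum size" leaves that boundary case open); `select_keep` is a hypothetical name.

```python
def select_keep(blob_sizes: list, max_total: int) -> int:
    """Walk backwards from the newest blob, accumulating sizes until
    adding one more blob would exceed max_total (e.g. free space * 0.8
    divided by the number of known pubkeys). Returns the index of the
    first blob to keep; everything before it gets cut off."""
    total = 0
    keep_from = len(blob_sizes)  # default: keep nothing
    for i in range(len(blob_sizes) - 1, -1, -1):
        if total + blob_sizes[i] > max_total:
            break
        total += blob_sizes[i]
        keep_from = i
    return keep_from
```

The kept blobs `blob_sizes[keep_from:]` would then be rewritten, in forward order, into the new data file plus its metadata file.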
the compaction algorithm should run roughly once every 15 minutes, but only visit pubkeys to which data has been appended in that timespan.
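
One way to restrict each sweep to recently-appended pubkeys is a dirty set that append operations mark and the periodic sweep drains. This is a minimal sketch, not from the spec; `CompactionScheduler` and its methods are hypothetical names, and the caller is assumed to invoke `sweep()` on a ~15-minute timer.

```python
import threading

class CompactionScheduler:
    """Track pubkeys that received appends since the last sweep, and
    compact only those. sweep() is meant to be called roughly every
    15 minutes by an external timer."""

    def __init__(self, compact_fn):
        self._dirty = set()
        self._lock = threading.Lock()
        self._compact = compact_fn  # e.g. locks the pubkey and compacts it

    def mark_appended(self, pubkey: str):
        """Called after appending data for a pubkey."""
        with self._lock:
            self._dirty.add(pubkey)

    def sweep(self):
        """Compact every pubkey touched since the previous sweep."""
        with self._lock:
            batch, self._dirty = self._dirty, set()
        for pk in batch:
            self._compact(pk)
```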