This thread is meant as a place for discussions about the implementation of core-js, core-rs and core-rs-albatross. The focus will be on current issues and just generally what we’re working on right now - so mostly Albatross.
Rust client API
To start things off: I’m working on the client API for core-rs-albatross. We will probably also have a client API for core-rs, since it’s easy to port the changes between the two (they share the same code base and until recently were also managed in the same repository as separate branches).
The client API is actually done, except for one piece: the database. We used references and thus had to parameterize most of our types with the lifetime of the database. This leads to problems when passing those types into tokio (a framework for asynchronous multi-threading). It’s basically impossible without unsafe Rust, and even with unsafe Rust it’s very hard to do safely. So I started changing this to use reference counting (i.e. std::sync::Arc). This should be done soon, and then I hope I can finish up the client API.
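To illustrate the issue, here is a minimal sketch (with hypothetical `Environment` and `Blockchain` types, not the real core-rs-albatross ones): a type that borrows the database with a lifetime cannot be moved into a thread or task that may outlive the borrow, whereas an `Arc`-owning version is `'static` and crosses thread boundaries freely. Plain `std::thread` stands in for tokio here to keep the example self-contained.

```rust
use std::sync::Arc;
use std::thread;

// Hypothetical stand-in for the database handle.
struct Environment {
    path: String,
}

// Lifetime-based version: a type like `Blockchain<'env>` borrowing the
// environment cannot be moved into a spawned task, since the task may
// outlive the borrow. That is the problem described above:
//
// struct Blockchain<'env> { env: &'env Environment }

// Arc-based version: ownership is shared via reference counting, so the
// value satisfies the 'static bound and can be sent to other threads.
struct Blockchain {
    env: Arc<Environment>,
}

fn main() {
    let env = Arc::new(Environment { path: "./db".into() });
    let chain = Blockchain { env: Arc::clone(&env) };

    // Moving `chain` into another thread now works, because the Arc
    // keeps the environment alive as long as any clone of it exists.
    let handle = thread::spawn(move || chain.env.path.len());
    assert_eq!(handle.join().unwrap(), 4);
}
```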
Pascal has been working on the MacroBlockSync. It’s a sync mode we thought up last hackathon when we realized that block production with Albatross would be too fast to sync up against: validators produce blocks almost as fast as you can sync them, at least in our test setup where network latency is very low.
But we can use the pBFT consensus at the end of an epoch to speed syncing up. An epoch is currently 128 blocks long - but that will most likely change. At the end of the epoch all validators need to reach consensus on which chain will be finalized (if they ended up with different chains at all). So they basically vote on a macro block. If 2/3 accept a macro block, this block, and with it all blocks before it, becomes finalized, meaning they can’t be reversed. The signatures of all votes are stored in the macro block as a justification. So, as long as you start from the genesis block and always verify that the justification in the next macro block is correct, you know that your chain is correct. In order to verify a new macro block you only need to know the set of active validators, which depends only on the previous macro block. Thus you can skip all micro blocks.
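The verification chain described above can be sketched roughly like this. The types are heavily simplified stand-ins for the real core-rs-albatross structures (the real justification is an aggregate BLS signature, not a vote count), and the exact 2/3 threshold rule here is an assumption for illustration:

```rust
// Hypothetical, simplified macro block: the real structure carries a
// header, an aggregate signature justification, and much more.
struct MacroBlock {
    block_number: u32,
    validators: Vec<u64>,     // active validator set (public keys, simplified)
    justification_votes: u32, // number of validator votes in the justification
}

// A macro block is accepted if at least 2/3 of the validators that were
// active in the previous epoch signed its justification (threshold rule
// simplified here for illustration).
fn verify_justification(prev: &MacroBlock, next: &MacroBlock) -> bool {
    let n = prev.validators.len() as u32;
    next.justification_votes * 3 >= n * 2
}

// Sync by walking the macro block chain only, skipping all micro blocks:
// each macro block is checked against the validator set established by
// the previous one.
fn macro_block_sync(blocks: &[MacroBlock]) -> bool {
    blocks.windows(2).all(|w| {
        w[1].block_number > w[0].block_number && verify_justification(&w[0], &w[1])
    })
}

fn main() {
    let chain = vec![
        // Genesis macro block: trusted, needs no justification.
        MacroBlock { block_number: 0, validators: vec![1, 2, 3, 4], justification_votes: 0 },
        MacroBlock { block_number: 128, validators: vec![1, 2, 3, 5], justification_votes: 3 },
        MacroBlock { block_number: 256, validators: vec![2, 3, 4, 5], justification_votes: 4 },
    ];
    assert!(macro_block_sync(&chain));
}
```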
So our MacroBlockSync will only synchronize macro blocks until it’s roughly at the head of the chain - actually, clients will start syncing normally a bit earlier, so that they have at least the last 2 epochs completely. This means we only have to sync 1/128 of all blocks to have a correct chain.
Of course then we’re missing the transactions from the micro blocks. After all, macro blocks don’t store any transactions at all.
Therefore we added an accumulator (i.e. a Merkle tree) over all transactions of that epoch to the macro block. With this, a client can verify that a transaction was actually included in that epoch without ever having to look at the micro block it was included in. Then we just added a message to request the transactions of an epoch in bulk.
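The inclusion check works like any Merkle proof: the client recomputes the root from the transaction and a sibling path and compares it against the root stored in the macro block. A minimal sketch, using `u64` stand-ins for real hashes and `DefaultHasher` instead of a cryptographic hash (both assumptions for illustration, not the real format):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy hash over a slice of values; the real tree uses a cryptographic hash.
fn h(data: &[u64]) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

// Build the Merkle root over all transaction hashes of an epoch.
fn merkle_root(mut leaves: Vec<u64>) -> u64 {
    while leaves.len() > 1 {
        leaves = leaves
            .chunks(2)
            .map(|p| if p.len() == 2 { h(&[p[0], p[1]]) } else { p[0] })
            .collect();
    }
    leaves[0]
}

// Verify a transaction against the root using its sibling path.
// Each step carries the sibling hash and whether it sits on the left.
fn verify_inclusion(leaf: u64, path: &[(u64, bool)], root: u64) -> bool {
    let acc = path.iter().fold(leaf, |acc, &(sib, left)| {
        if left { h(&[sib, acc]) } else { h(&[acc, sib]) }
    });
    acc == root
}

fn main() {
    let txs = vec![10, 20, 30, 40]; // stand-in transaction hashes of one epoch
    let root = merkle_root(txs.clone());
    // Proof for tx 30: sibling 40 on the right, then h(10, 20) on the left.
    let path = [(40, false), (h(&[10, 20]), true)];
    assert!(verify_inclusion(30, &path, root));
}
```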
An important observation is that most micro blocks will probably be rather empty (at least for some time). This means transmitting and verifying their headers is quite some overhead, which we avoid this way.
PS: After writing this I thought that I could also add some links into the code. E.g. for micro and macro blocks. With the links you could have a quick look at how the data structures I’m talking about actually look like. Let me know if that’s something you’d actually use.
A few more things about the MacroBlockSync came to my mind:
The signatures for block verification are BLS12-381 signatures, which can be aggregated efficiently (aggregation is literally an addition, which is used for voting, for example). Also, public keys can be derived efficiently from private keys. But verification is very slow. That’s why it’s so good that we can add them up: for the voting we aggregate the signatures of all validators and in the end only have to verify one aggregate signature against all the validators’ public keys. That makes it much faster than verifying all individual signatures.
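The reason aggregation is "literally an addition" is that BLS signatures live in a group where the signing operation is linear in the secret key. A toy illustration of that homomorphism, using integers mod a prime instead of the BLS12-381 curve (real BLS verifies the aggregate with pairings and public keys only; the secret-key check below is purely to show the algebra):

```rust
// Toy modulus, standing in for the order of the BLS12-381 group.
const P: u64 = 2_147_483_647;

// Toy "signature": sigma = sk * H(m) in the group. In real BLS this is
// a scalar multiplication of a curve point, not an integer product.
fn sign(sk: u64, msg_hash: u64) -> u64 {
    (sk as u128 * msg_hash as u128 % P as u128) as u64
}

// Aggregation really is just addition in the group.
fn aggregate(sigs: &[u64]) -> u64 {
    sigs.iter()
        .fold(0u64, |a, &s| ((a as u128 + s as u128) % P as u128) as u64)
}

fn main() {
    let msg_hash = 123_456_789;
    let secret_keys = [17, 42, 99]; // one per validator (toy values)

    // Each validator signs the same macro block hash.
    let sigs: Vec<u64> = secret_keys.iter().map(|&sk| sign(sk, msg_hash)).collect();
    let agg = aggregate(&sigs);

    // One aggregate check replaces three individual verifications:
    // sigma_agg = (sk_1 + sk_2 + sk_3) * H(m). Real BLS checks the same
    // relation via pairings, without ever touching the secret keys.
    let sk_sum: u64 = secret_keys.iter().sum::<u64>() % P;
    assert_eq!(agg, sign(sk_sum, msg_hash));
}
```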
But yeah, verification is still slow. It’s about 100 ms (at least that order of magnitude) for one signature. So if we skip 127 micro blocks, we save 12.7 seconds of just verifying that the signatures are correct. We only have to verify the 2 BLS signatures in the macro block, plus possibly a BLS signature for a view change that occurred for the macro block. So with MacroBlockSync we only take about 300 ms instead of 13 seconds - for the current epoch length of 128 blocks.
So even if you consider full micro blocks, it’s still much faster.
Compared to NiPoPoW it still doesn’t scale as well. NiPoPoW only needs to verify a logarithmic number of blocks: if the chain is n blocks long, we only need to verify about log2(n) blocks on average, so the verification time grows sub-linearly. For the MacroBlockSync it still grows linearly, even though we save a big proportion of time compared to a full sync.
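To make the growth rates concrete, here is a rough back-of-the-envelope comparison of how many blocks each sync mode has to verify, using the numbers from this thread (epoch length 128, NiPoPoW ~log2(n)). The chain lengths are arbitrary example values:

```rust
// Rough comparison of verification counts per sync mode.
fn main() {
    let epoch_length = 128u64;
    for &n in &[1_000u64, 1_000_000, 1_000_000_000] {
        let full = n;                                  // verify every block
        let macro_sync = n / epoch_length;             // still linear, but 128x cheaper
        let nipopow = (n as f64).log2().ceil() as u64; // sub-linear
        println!(
            "n = {:>13}: full {:>13}, macro {:>10}, nipopow {:>3}",
            n, full, macro_sync, nipopow
        );
    }
}
```

Even at a billion blocks MacroBlockSync still verifies millions of macro blocks, while NiPoPoW-style proofs stay around 30, which is the scaling gap described above.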
Also, with full sync or macro sync you don’t have to sync the transactions, as we still have a Merkle tree in all micro blocks and macro blocks. Thus you can prove that a transaction is valid without syncing the block bodies, which is good for light-weight environments such as a browser.
But of course we would like to have something like a nano or pico consensus for Albatross, and we already found something that might work. But this consensus mode will need a bit more research, and Pascal knows more about it than I do. So we’ll publish information about it once we really look into it.
Just a quick correction: the verification time of about 100 ms for a BLS signature was measured in debug mode. The verification time in a release build will be much faster - from what I observed, about 100x faster, but I didn’t run a proper benchmark. So the absolute numbers in the speedup analysis weren’t correct; the factor of speedup remains the same, though. Also, MacroBlockSync still has the big advantage of not syncing so much data for almost-empty micro blocks.
This is (IMO) the best thread here, really informative and interesting.
Just one question (it’s related to the coin I’m working on): have you considered some kind of hybrid PoW-PoS scheme? Something like 3-4 macro blocks minted with PoS and then one minted with PoW. (That’s how our coin works.) As you know, PoW, even if it’s not very (let’s say) ecological, has its own advantages: it adds extra security (it’s hard to break) to the blockchain (that’s why the BTC blockchain is the most secure database in the world).
Please, keep us informed.
So what coin are you working on out of curiosity? I’m also looking forward to seeing the answer to your question
My personal opinion is that a hybrid isn’t really necessary long term. Maybe during the transition it’d make sense, but eventually full POS should be the goal imo. POW only provides a little extra security over POS, and the true security of POW is only seen in a full POW chain imo (like how BTC is pure POW and the most secure blockchain to date).
Great thread, thank you for documenting the work and your reasoning @jgraef