remove storage of intermediate checkpoints in restore blocks
This was a perhaps too-early optimization trying to reduce the time spent rolling back. The `sparseCheckpoint` function returns an empty list for pretty much the entire restoration, except when reaching the last k blocks, where it returns a list of checkpoints to save: sparse for older blocks and dense near the tip. With the current parameters, blocks are kept if:

- their block height is not older than (tip - k), or it is 0;
- and their block height is a multiple of 100, or within 10 blocks of the last known block.

We currently do this calculation and filtering in two places:

1. In `restoreBlocks`, to pre-filter the checkpoints to store in the database.
2. In `prune` from the wallet DBLayer, to garbage-collect old checkpoints.

Yet, what (1) buys us is a very small gain on standard wallets, and a huge performance cost on large wallets. So let's analyze the two cases:

A/ Small wallets

- The time to create a checkpoint is very small compared to the slot length.
- Restoring blocks is fast (up to 10K blocks per second on empty wallets).

Therefore, rolling back 2, 3 or 100 blocks makes pretty much no difference. Being able to store more checkpoints near the tip adds very little benefit in terms of performance, especially for the first restoration.

B/ Large wallets

- The time to create a checkpoint is significant compared to the slot length (we've seen up to 4s).
- Restoring blocks is still quite fast (the time needed for processing blocks remains small compared to the time needed to read and create new checkpoints).

The main problem with large wallets occurs when the wallet is almost synced and reaches the last 10 blocks of the chain. By trying to store intermediate checkpoints, not only does the wallet spend 10x more time in `restoreBlocks` than normally, but it also keeps the database lock for that entire duration.
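For reference, the keep/discard rule described above can be sketched as follows. This is a minimal, hypothetical sketch (the names `keepCheckpoint` and `sparseCheckpoints` and the explicit parameters are illustrative, not the actual `sparseCheckpoint` API):

```haskell
-- Hypothetical sketch of the rule above: a block height is kept if it
-- is genesis or not older than (tip - k), AND it sits on the sparse
-- 100-block grid or within 10 blocks of the last known block.
keepCheckpoint
    :: Word  -- ^ k, the security parameter
    -> Word  -- ^ tip, height of the last known block
    -> Word  -- ^ h, height of a candidate checkpoint
    -> Bool
keepCheckpoint k tip h =
    (h == 0 || h + k >= tip)
        && (h `mod` 100 == 0 || h + 10 >= tip)

-- Heights that survive filtering, from genesis up to the tip.
sparseCheckpoints :: Word -> Word -> [Word]
sparseCheckpoints k tip = filter (keepCheckpoint k tip) [0 .. tip]
```

With k = 2160 and a tip at height 10000, this keeps genesis, every 100th block in the last k blocks, and the 10 blocks nearest the tip.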
Consider the case where the wallet takes 4s to read and 4s to create a checkpoint, plus an additional 2s to prune them (these are actual figures from large exchanges): by default, 10s is spent creating one checkpoint, and at least 40 more creating the intermediate ones. During this time, between 1 and 3 new checkpoints have been created, so the wallet already needs to prune out the last one it spent 12s creating, and needs to create new checkpoints right away. As a consequence, a lot of other functionalities are made needlessly slower than they could be, because for the whole duration of the `restoreBlocks` function, the wallet is holding the database lock.

Now, what happens if we avoid storing the "intermediate" checkpoints in `restoreBlocks`? Blocks near the tip will eventually get stored, but one by one. So, when we _just_ reach the tip for the first time, we'll only store the last checkpoint; but then, the next 9 checkpoints created won't be pruned out. The worst that can happen is that the wallet is asked to roll back right after we've reached the tip, before many checkpoints have been created. Still, we would have at least two checkpoints in the past that are at most 2K blocks from the tip (because we fetch blocks in batches of 1000). So it's important that the batch size remains smaller than `k`, so that we can be sure there's always one checkpoint in the database.
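The batch-size argument above can be made concrete with a small model. This is a hypothetical sketch (names like `latestCheckpoint` and `rollbackSafe` are illustrative): during restoration a checkpoint is written once per completed batch, so the latest stored checkpoint sits at the last batch boundary at or below the tip, and as long as the batch size is smaller than `k`, that checkpoint is always fewer than `k` blocks behind the tip.

```haskell
-- Hypothetical model: one checkpoint per completed batch, so the latest
-- stored checkpoint is the last batch boundary at or below the tip.
latestCheckpoint :: Word -> Word -> Word
latestCheckpoint batchSize tip = tip - (tip `mod` batchSize)

-- The invariant from the text: if batchSize < k, the latest checkpoint
-- is strictly fewer than k blocks behind the tip, so a rollback of up
-- to k blocks can always restart from a stored checkpoint.
rollbackSafe
    :: Word  -- ^ batch size used when fetching blocks
    -> Word  -- ^ k, the security parameter
    -> Word  -- ^ tip, current block height
    -> Bool
rollbackSafe batchSize k tip =
    tip - latestCheckpoint batchSize tip < k
```

For example, with batches of 1000 and k = 2160 the invariant holds at every height, whereas a batch size of 3000 would break it (a tip at height 2999 would leave only the genesis checkpoint, more than k blocks away).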