142 commits this week (Feb 20, 2020 - Feb 27, 2020)

Merge #1655

1655: Bring OuroborosApplication into line with MuxApplication r=karknu a=dcoutts

In a previous patch series we changed the structure of the MuxApplication to simply use a list of mini-protocols, rather than indexing on a mini-protocol enumeration type. However, that PR kept the OuroborosApplication interface the same.

This patch finishes that job. Now the OuroborosApplication is also structured as a list of mini-protocols, and there are no more mini-protocol enum types.

This will later allow protocol versioning to be made simpler, since we now just have value-level things to deal with, rather than version negotiation leading to a separate protocol enum type, requiring existentials, etc.

This patch also makes heavier use of the MuxPeer type. The intention is that in future, use sites will not need to call runPeer directly, which will make it easier to change the protocol driver. For now, however, there are various cases where we must use runPeer directly, so we provide MuxPeerRaw as an escape hatch. This still establishes a direction of travel.
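
As an illustration of the new shape, here is a minimal sketch with simplified, hypothetical types (not the actual ouroboros-network definitions): the application is just a value-level list of mini-protocols, and each mini-protocol carries either a typed-peer runner or a raw channel callback, analogous to MuxPeerRaw.

    module MiniProtocolSketch where

    -- Simplified stand-in for a bearer channel.
    newtype Channel m = Channel { send :: String -> m () }

    -- Either a runner built from a typed peer (what MuxPeer plus runPeer
    -- would give us) or a raw callback on the channel (the MuxPeerRaw
    -- escape hatch).
    data PeerRunner m a
      = TypedPeer (Channel m -> m a)
      | RawPeer   (Channel m -> m a)

    -- A mini-protocol is now just a value: a number plus its runner.
    data MiniProtocol m a = MiniProtocol
      { miniProtocolNum    :: Int
      , miniProtocolRunner :: PeerRunner m a
      }

    -- The application is a plain list; no mini-protocol enum type needed.
    newtype App m a = App [MiniProtocol m a]

    runMiniProtocol :: Channel m -> MiniProtocol m a -> m a
    runMiniProtocol chan mp = case miniProtocolRunner mp of
      TypedPeer run -> run chan
      RawPeer   run -> run chan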

This builds on top of #1610, so it’s only worth looking at the last 4 patches.

Co-authored-by: Duncan Coutts [email protected]
Co-authored-by: Karl Knutsson [email protected]

Merge #1709

1709: ChainDB: add blocks asynchronously r=edsko a=mrBliss

Fixes #1463.

Instead of being added synchronously, blocks are now put into a queue, after which addBlockAsync returns an AddBlockPromise, which can be used to wait until the block has been processed.

A background thread will read the blocks from the queue and add them synchronously to the ChainDB. The queue is limited in size; when it is full, callers of addBlockAsync might still have to wait.

With this asynchronous approach, threads adding blocks asynchronously can be killed without worries: the background thread that processes the blocks synchronously won’t be killed along with them. Only when the whole ChainDB shuts down will that background thread be killed, and since no in-memory state remains at that point, it cannot get out of sync with the file system state. On the next startup, a correct in-memory state will be reconstructed from the file system state.

By letting the BlockFetchClient add blocks asynchronously, we also get a 20-40% bulk chain sync speed-up in some microbenchmarks.
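
The queueing scheme can be pictured with a minimal sketch (hypothetical names and simplified types, not the actual ChainDB API), using a bounded TBQueue and a TMVar as the promise the caller can wait on:

    module AddBlockSketch where

    import Control.Concurrent.STM
    import Control.Monad (forever)

    newtype Block = Block Int deriving Show

    -- Stands in for AddBlockPromise: wait until the block has been processed.
    newtype BlockProcessed = BlockProcessed { waitProcessed :: STM () }

    newtype AddBlockQueue = AddBlockQueue (TBQueue (Block, TMVar ()))

    -- Bounded queue: when it is full, writers block until there is room.
    newAddBlockQueue :: IO AddBlockQueue
    newAddBlockQueue = AddBlockQueue <$> newTBQueueIO 16

    -- Enqueue a block and return a promise that is fulfilled once the
    -- background thread has processed it.
    addBlockAsyncSketch :: AddBlockQueue -> Block -> IO BlockProcessed
    addBlockAsyncSketch (AddBlockQueue q) blk = do
      done <- newEmptyTMVarIO
      atomically (writeTBQueue q (blk, done))
      return (BlockProcessed (readTMVar done))

    -- Background thread: drain the queue, add each block synchronously,
    -- then fulfil its promise.
    backgroundThread :: AddBlockQueue -> (Block -> IO ()) -> IO ()
    backgroundThread (AddBlockQueue q) addBlockSync = forever $ do
      (blk, done) <- atomically (readTBQueue q)
      addBlockSync blk
      atomically (putTMVar done ())

A caller that needs the old synchronous behaviour can simply run atomically (waitProcessed promise) after enqueueing.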

Co-authored-by: Thomas Winant [email protected]

Merge #1701

1701: VolatileDB: Unify and extend getters r=mrBliss a=kderme

Related issue: https://github.com/input-output-hk/ouroboros-network/issues/1684

This is done:

Unify getIsMember and getPredecessor, and return BlockInfo (sketched below)

This is not done yet:

Possibly unify it with getSuccessors too, e.g., by returning a pair of both results.
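
To illustrate the unified getter, a rough sketch with simplified, hypothetical types (not the actual VolatileDB code): a single lookup returns Maybe BlockInfo, and the old getIsMember / getPredecessor queries become projections of that result.

    module VolatileDBSketch where

    import           Data.Map.Strict (Map)
    import qualified Data.Map.Strict as Map

    type Hash = String

    data BlockInfo = BlockInfo
      { biPredecessor :: Hash
      , biBlockNo     :: Int
      } deriving Show

    newtype Index = Index (Map Hash BlockInfo)

    -- The single unified getter.
    getBlockInfo :: Index -> Hash -> Maybe BlockInfo
    getBlockInfo (Index m) h = Map.lookup h m

    -- The old queries fall out as projections.
    getIsMember :: Index -> Hash -> Bool
    getIsMember idx = maybe False (const True) . getBlockInfo idx

    getPredecessor :: Index -> Hash -> Maybe Hash
    getPredecessor idx = fmap biPredecessor . getBlockInfo idx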

Co-authored-by: kderme [email protected]
Co-authored-by: Thomas Winant [email protected]

Merge #1705

1705: network: add IsIdle argument to the PeerFetchStatusReady fingerprint r=dcoutts a=nfrisby

Fixes #1147.

I’ve been using this commit during all of my recent development on the tests; without it, Issue #1147 failures have occasionally masked the failures I was actually focusing on.

I still don’t have a repro on master, but I think I’ll be able to find one after a couple more PRs are merged — the #1147 failure has arisen during my development even without the simulated network latency that originally revealed it.

Disclaimer: though this commit seems well-vetted by its use in the not-yet-on-master tests mentioned above, I do not understand the BlockFetch code very well, and moreover it’s been a while since I actually wrote this code (in particular, I’m not exactly sure what the Boolean adds beyond checking whether the adjacent set is empty). It’d be wonderful to see a localized test case for this.
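
Schematically, the change can be pictured like this (simplified, hypothetical types, not the real BlockFetch code): the fingerprint that the fetch decision logic compares between iterations now includes an IsIdle flag, so a peer transitioning between idle and not idle is itself detected as a change even when the in-flight set looks the same.

    module FingerprintSketch where

    import Data.Set (Set)

    type Point = Int

    data IsIdle = Idle | NotIdle
      deriving (Eq, Show)

    -- IsIdle is the newly added argument to the "ready" status.
    data PeerFetchStatus
      = PeerStatusShutdown
      | PeerStatusReady (Set Point) IsIdle
      deriving (Eq, Show)

    -- The decision logic re-runs only when the fingerprint changes; with
    -- IsIdle included in the Eq comparison, idleness transitions now count.
    fingerprintChanged :: PeerFetchStatus -> PeerFetchStatus -> Bool
    fingerprintChanged old new = old /= new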

Co-authored-by: Nicolas Frisby [email protected]

Merge #1702

1702: Remove ErrorHandling r=mrBliss a=mrBliss

Fixes #1682.

This abstraction allowed choosing how errors would be thrown or caught. In the real implementation we’d use exceptions and in the model we’d use Either or Except. However, we have since rewritten our models to use pure functions instead of living in some monad m, so they no longer have to fit the APIs exactly, making this abstraction redundant.
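
For context, a rough sketch of the kind of record being removed (the field names here are assumptions, not necessarily the exact original definition), together with the pure style that replaces it:

    {-# LANGUAGE RankNTypes #-}
    module ErrorHandlingSketch where

    import Control.Exception (Exception, catch, throwIO)

    -- A dictionary saying how errors are thrown and caught in some monad m,
    -- so the same code could run in IO (exceptions) or in Either/Except.
    data ErrorHandling e m = ErrorHandling
      { throwError :: forall a. e -> m a
      , catchError :: forall a. m a -> (e -> m a) -> m a
      }

    -- The real implementation: IO with exceptions.
    ioErrorHandling :: Exception e => ErrorHandling e IO
    ioErrorHandling = ErrorHandling
      { throwError = throwIO
      , catchError = catch
      }

    -- With the models rewritten as pure functions, they can return
    -- Either e directly, so no ErrorHandling dictionary is needed.
    pureModelStep :: state -> Either e state
    pureModelStep = Right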

Co-authored-by: Thomas Winant [email protected]
