Bring back the old behavior
Signed-off-by: Sasha Bogicevic <[email protected]>
Not sure if reusing `scaling-factor` is a good idea here.

Signed-off-by: Sasha Bogicevic <[email protected]>
When generating datasets we also run benchmarks, but the `single` command just runs the existing dataset.

Signed-off-by: Sasha Bogicevic <[email protected]>
We should remove this in a regular PR, not one dealing only with benchmarks.

Signed-off-by: Sasha Bogicevic <[email protected]>
We now pass in the wanted number of txs, and we also output the number of fanout outputs so we can track fanout performance.

Signed-off-by: Sasha Bogicevic <[email protected]>
> NodeState now tracks `currentChainPoint :: ChainPointType tx` instead of `currentSlot :: ChainSlot` and `currentChainTime :: Maybe UTCTime`
> NodeSynced and NodeUnsynced now carry `ChainPointType tx` instead of `chainTime :: UTCTime`
> extended the IsChainState class with `chainPointTime :: ChainPointType tx -> UTCTime`
> `initialChainTime = posixSecondsToUTCTime 0`
* extended NodeSynced and NodeUnsynced with time and slot drifts
* added Haddocks to both currentSlot and currentChainTime
* made NodeState.currentChainTime non-optional
* defined initialChainTime = posixSecondsToUTCTime 0
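A minimal sketch of the change described above, assuming a concrete `ChainPoint` record as a stand-in for the `ChainPointType tx` family from the real `IsChainState` class (the field names and the `chainPointTime` accessor shown here are illustrative, not Hydra's actual definitions):

```haskell
import Data.Time (UTCTime)
import Data.Time.Clock.POSIX (posixSecondsToUTCTime)

-- Hypothetical concrete chain point carrying both slot and wall-clock time.
data ChainPoint = ChainPoint
  { pointSlot :: Word
  , pointTime :: UTCTime
  } deriving (Show)

-- Before: NodeState carried currentSlot and an optional currentChainTime.
-- After: a single chain point replaces both fields, so the time is never missing.
newtype NodeState = NodeState {currentChainPoint :: ChainPoint}

-- Sketch of the chainPointTime class method, here as a plain function.
chainPointTime :: ChainPoint -> UTCTime
chainPointTime = pointTime

-- The initial chain time from the PR: the POSIX epoch.
initialChainTime :: UTCTime
initialChainTime = posixSecondsToUTCTime 0

main :: IO ()
main = print initialChainTime
```

Collapsing the two fields into one chain point avoids the `Maybe UTCTime` case: any synced state always has a time attached to its point.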
> because we have no recent view of the chain since we are out of sync
> when calculating the slot drift
> the former gets updated upon ticks, the latter upon observed head transitions, which are usually older than ticks
This reverts commit 037aea1d9f15f65db00113f0606f9e09d0b03548.
Follow-up to https://github.com/cardano-scaling/hydra/pull/2430

> Reduce the size of the input and etcd pending broadcast queues so they are smaller than the logging queue.

When running `cabal run bench-e2e -- single --cluster-size 3 --scaling-factor 100`, having upstream queues as large as or larger than the logging queue led to invalid transactions under load. This change enforces proper back pressure and prevents that behavior.

---

* [X] CHANGELOG updated or not needed
* [X] Documentation updated or not needed
* [X] Haddocks updated or not needed
* [X] No new TODOs introduced or explained hereafter
> they need to be smaller than the logging queue size, which is 500 by default
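The sizing invariant above can be sketched with bounded STM queues. This assumes `TBQueue` from the `stm` package; the 500-element logging queue matches the default quoted in the PR, while the upstream sizes are hypothetical and only illustrate that they must stay strictly smaller so back pressure hits producers before the logger:

```haskell
import Control.Concurrent.STM (TBQueue, newTBQueueIO)
import Control.Monad (unless)
import Numeric.Natural (Natural)

-- Logging queue default mentioned in the PR.
loggingQueueSize :: Natural
loggingQueueSize = 500

-- Hypothetical upstream sizes; the invariant is what matters, not the values.
inputQueueSize, pendingBroadcastSize :: Natural
inputQueueSize = 100
pendingBroadcastSize = 100

main :: IO ()
main = do
  -- Enforce the invariant: upstream queues are smaller than the logging queue,
  -- so a full logging queue cannot hide unbounded upstream buffering.
  unless (inputQueueSize < loggingQueueSize && pendingBroadcastSize < loggingQueueSize) $
    error "upstream queues must be smaller than the logging queue"
  _input <- newTBQueueIO inputQueueSize :: IO (TBQueue ())
  _bcast <- newTBQueueIO pendingBroadcastSize :: IO (TBQueue ())
  putStrLn "queue sizes respect the back pressure invariant"
```

With bounded queues, a slow consumer blocks writers once its queue fills; keeping the upstream bounds tighter than the logging bound means the input path throttles first instead of accepting transactions it cannot process in time.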