In my testing, this seems to help the e2e state-sync tests complete more reliably, by fixing some potential range-related bugs in slice construction, as well as the way the test app hashes snapshots.

Additionally (and I'm not sure we want to keep this), I added a hook to the reactor that re-sends the request for snapshots during each retry. In tests this keeps nodes from getting stuck, but in practice it might create more traffic, and operators would likely just restart a state-syncing node to get a similar effect.
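A minimal sketch of what that retry hook could look like; the `Reactor` and `requestSnapshots` names here are stand-ins, not the actual state-sync API:

```go
package statesync

import (
	"context"
	"time"
)

// Reactor is a stand-in for the state-sync reactor; requestSnapshots
// is assumed to broadcast a snapshots request to all connected peers.
type Reactor struct{}

func (r *Reactor) requestSnapshots() { /* broadcast the snapshots request */ }

// resendOnRetry re-sends the snapshot request on every retry tick, so
// a node that missed the original responses does not get stuck waiting.
func (r *Reactor) resendOnRetry(ctx context.Context, retry *time.Ticker) {
	for {
		select {
		case <-retry.C:
			r.requestSnapshots()
		case <-ctx.Done():
			return
		}
	}
}
```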
* add time warping lunatic attack test
* create too high and connection refused errors and add them to the light client provider
* add height check to provider
* introduce block lag
* add detection logic for processing forward lunatic attack
* add node-side verification logic
* clean up tests and formatting
* update ADRs
* update testing
* fix fetching the latest block
* format
* update changelog
* implement suggestions
* modify ADRs
* format
* clean up node evidence verification
## Description
- Add `context.Context` to Privval interface
This PR does not introduce context into our custom privval connection protocol, because that protocol will be removed in the next release, after this PR ships.
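For context, the resulting interface looks roughly like this (sketched from the description above; see `types/priv_validator.go` for the authoritative shape):

```go
import (
	"context"

	"github.com/tendermint/tendermint/crypto"
	tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)

// PrivValidator with context.Context threaded through every call, so
// slow or remote signers can be cancelled and time-bounded.
type PrivValidator interface {
	GetPubKey(ctx context.Context) (crypto.PubKey, error)

	SignVote(ctx context.Context, chainID string, vote *tmproto.Vote) error
	SignProposal(ctx context.Context, chainID string, proposal *tmproto.Proposal) error
}
```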
Also:
- replace `MaxReconnectAttempts`, `ReadWait`, `WriteWait` and `PingPeriod` options with `WSOptions` in `WSClient` (rpc/jsonrpc/client/ws_client.go); see the sketch below
- set default write wait to 10s for `WSClient` (rpc/jsonrpc/client/ws_client.go)
- unexpose `WSEvents` (rpc/client/http.go)
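For example, a caller migrating off the removed per-field options might construct the client like this (assuming `WSOptions`, `DefaultWSOptions` and `NewWSWithOptions` as the new surface; treat the exact names as a sketch):

```go
package main

import (
	"time"

	rpcclient "github.com/tendermint/tendermint/rpc/jsonrpc/client"
)

func main() {
	// Bundle the old per-field knobs into one options struct.
	opts := rpcclient.DefaultWSOptions()
	opts.MaxReconnectAttempts = 10
	opts.WriteWait = 10 * time.Second // the new default
	opts.ReadWait = 30 * time.Second
	opts.PingPeriod = 20 * time.Second

	ws, err := rpcclient.NewWSWithOptions("tcp://localhost:26657", "/websocket", opts)
	if err != nil {
		panic(err)
	}
	_ = ws
}
```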
Closes #6162
```go
// unbuffered
out, err := httpClient.Subscribe(ctx, "event.type=NewTx AND account.name=Jack", 0)
// buffered
out, err := httpClient.Subscribe(ctx, "event.type=NewTx AND account.name=Jack", 20)
```
Before: when the `out` channel is buffered and becomes full, we drop the event (and log the error).
After: when the `out` channel is buffered and becomes full, we block.
**Before this change, it was not apparent to the app when an event was dropped (checking the logs is a manual task). After this PR, if the user does not read from `out` on one subscription, all other subscriptions will be stuck too.**
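Given the new blocking behavior, a client should drain every subscription, e.g. with one goroutine per `out` channel. A minimal sketch, continuing the snippet above (`handleEvent` is a hypothetical application handler):

```go
// Drain the subscription in its own goroutine so a slow consumer
// cannot stall the other subscriptions sharing the connection.
go func() {
	for ev := range out {
		handleEvent(ev) // hypothetical handler
	}
}()
```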
Closes #6161
Fixes the race condition between a callback being set and called during ReCheckTx. Note: I no longer see the equivalent logic in the gRPC client that #5439 points to, so only the socket client was updated.
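The usual shape of a fix for this kind of race, sketched with assumed names (this shows the general pattern, not the literal diff): guard the callback slot and the delivered response with one mutex, and invoke the callback immediately if the response has already arrived when it is set.

```go
package abciclient

import (
	"sync"

	abci "github.com/tendermint/tendermint/abci/types"
)

// reqRes pairs a request with its eventual response; mtx guards cb,
// res and done so setting and firing the callback cannot race.
type reqRes struct {
	mtx  sync.Mutex
	cb   func(*abci.Response)
	res  *abci.Response
	done bool
}

// SetCallback registers cb. If the response was already delivered,
// cb fires immediately instead of being silently skipped.
func (r *reqRes) SetCallback(cb func(*abci.Response)) {
	r.mtx.Lock()
	if r.done {
		res := r.res
		r.mtx.Unlock()
		cb(res)
		return
	}
	r.cb = cb
	r.mtx.Unlock()
}

// deliver records the response and fires the callback if one is set.
func (r *reqRes) deliver(res *abci.Response) {
	r.mtx.Lock()
	r.res = res
	r.done = true
	cb := r.cb
	r.mtx.Unlock()
	if cb != nil {
		cb(res)
	}
}
```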
Closes: #5439
@p4u from vocdoni.io reported that the mempool might behave incorrectly under high load. The consequences can range from pauses between blocks to peers disconnecting from this node.
My current theory is that the flowrate lib we're using for flow control (multiplexing over a single TCP connection) was not designed with large blobs (a 1MB batch of txs) in mind.

I've tried decreasing the mempool reactor's priority, but that did not have any visible effect. What actually worked was adding a time.Sleep into mempool.Reactor#broadcastTxRoutine after each successful send, i.e., manual flow control of a sort; see the sketch below.
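For illustration, the sleep sits in the broadcast loop roughly like this (the `Peer` interface, the channel feeding batches, and the 100ms interval are all illustrative, not the exact code):

```go
package mempool

import "time"

// Peer is a stand-in for p2p.Peer; Send reports whether the bytes
// were queued successfully.
type Peer interface {
	Send(chID byte, msgBytes []byte) bool
}

const MempoolChannel = byte(0x30)

// broadcastTxRoutine sketch: after each successful send, sleep briefly
// — the "manual flow control" described above — so large tx batches
// don't saturate the flowrate-limited multiplexed connection.
func broadcastTxRoutine(peer Peer, batches <-chan []byte) {
	for msgBytes := range batches {
		if peer.Send(MempoolChannel, msgBytes) {
			time.Sleep(100 * time.Millisecond) // interval is illustrative
		}
	}
}
```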
As a temporary remedy (until the mempool package is refactored), max-batch-bytes was disabled. Transactions will be sent one by one, without batching.
Closes #5796
When set to true, an invalid transaction will be kept in the cache (this may help some applications protect against spam).
NOTE: this is a temporary config option. The more correct solution would be to add a TTL to each transaction (i.e. CheckTx may return a TTL in ResponseCheckTx).
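Assuming the option surfaces on the mempool config struct as a boolean field (the field name `KeepInvalidTxsInCache` is an assumption based on this description), enabling it would look like:

```go
package main

import "github.com/tendermint/tendermint/config"

func main() {
	cfg := config.DefaultConfig()
	// Keep invalid txs in the cache so re-submitted spam is rejected
	// cheaply instead of re-entering CheckTx (field name assumed).
	cfg.Mempool.KeepInvalidTxsInCache = true
	_ = cfg
}
```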
Closes: #5751