This cleans up the `Router` code and adds a bunch of tests. These sorts of systems are a real pain to test, since they have a bunch of asynchronous goroutines living their own lives, so the test coverage is decent but not fantastic. Luckily we've been able to move all of the complex peer management and transport logic outside of the router, as synchronous components that are much easier to test, so the core router logic is fairly small and simple.
This also provides some initial test tooling in `p2p/p2ptest` that automatically sets up in-memory networks and channels for use in integration tests. It also includes channel-oriented test asserters in `p2p/p2ptest/require.go`, but these have primarily been written for router testing and should probably be adapted or extended for reactor testing.
This improves the `peerStore` prototype (a rough layout sketch follows the list below), e.g. by:
* Using a database with Protobuf for persistence, while also keeping the full peer set in memory for performance.
* Simplifying the API by taking/returning struct copies for safety, and removing errors for in-memory operations.
* Caching the ranked peer set, as a temporary solution until a better data structure is implemented.
* Adding `PeerManagerOptions.MaxPeers` and pruning the peer store (based on rank) when it's full.
* Rewriting `PeerAddress` to be independent of `url.URL`, normalizing it and tightening semantics.
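As a rough sketch of the layout those changes imply (type and field names here are illustrative, not the actual implementation), the store persists peers to a database while serving reads from memory and handing out copies:

```go
import dbm "github.com/tendermint/tm-db"

// Illustrative sketch only; the real peerStore types and methods may differ.
type nodeID string

type peerInfo struct {
	ID    nodeID
	Score int64
	// ...other fields, persisted as Protobuf
}

type peerStore struct {
	db     dbm.DB               // Protobuf-encoded peers, for persistence
	peers  map[nodeID]*peerInfo // full peer set kept in memory for performance
	ranked []*peerInfo          // cached ranked peer set, invalidated on writes
}

// Get returns a struct copy, so callers can't mutate the store's internal
// state, and no error, since purely in-memory reads can't fail.
func (s *peerStore) Get(id nodeID) (peerInfo, bool) {
	p, ok := s.peers[id]
	if !ok {
		return peerInfo{}, false
	}
	return *p, true
}
```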
## Description
Update the faux router to either drop channel errors or handle them based on an argument. This prevents deadlocks in tests where we try to send an error on the mempool channel but there is no reader.
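A minimal sketch of that pattern (the helper below is illustrative, not the actual test code): when the caller doesn't want errors handled, the send is non-blocking, so a missing reader can't deadlock the test.

```go
// sendError forwards err on errCh. With handleErr set, a reader is expected
// and the send blocks as usual; otherwise the error is silently dropped when
// nobody is reading, which is what prevents the deadlock.
func sendError(errCh chan<- error, err error, handleErr bool) {
	if handleErr {
		errCh <- err
		return
	}
	select {
	case errCh <- err:
	default: // no reader on the channel: drop the error instead of blocking
	}
}
```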
Closes: #5956
@p4u from vocdoni.io reported that the mempool might behave incorrectly under high load. The consequences can range from pauses between blocks to peers disconnecting from this node.
My current theory is that the flowrate lib we're using for flow control (multiplexing over a single TCP connection) was not designed with large blobs (a 1MB batch of txs) in mind.
I've tried decreasing the mempool reactor priority, but that did not have any visible effect. What actually worked was adding a time.Sleep into mempool.Reactor#broadcastTxRoutine after each successful send, i.e. a sort of manual flow control.
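The workaround amounts to something like this inside the broadcast loop (a sketch, not the actual diff; the interval name and value are arbitrary):

```go
if peer.Send(MempoolChannel, msgBytes) {
	// Crude manual flow control: after a successful send, give the
	// multiplexed TCP connection time to drain before pushing the next
	// (potentially large) batch of txs to this peer.
	time.Sleep(broadcastSleepInterval) // e.g. a few tens of milliseconds
}
```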
As a temporary remedy (until the mempool package is refactored), max-batch-bytes was disabled. Transactions will be sent one by one, without batching.
Closes #5796
When set to true, an invalid transaction will be kept in the cache (this may help some applications to protect against spam).
NOTE: this is a temporary config option. The more correct solution would be to add a TTL to each transaction (i.e. CheckTx may return a TTL in ResponseCheckTx).
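In cache terms, the option boils down to roughly the following check after CheckTx responds (condition and field names are approximate):

```go
// Only evict an invalid tx from the cache when the operator has not opted
// into keeping invalid txs cached (the anti-spam use case above).
if res.CheckTx.Code != abci.CodeTypeOK && !memCfg.KeepInvalidTxsInCache {
	cache.Remove(tx)
}
```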
Closes: #5751
After a reactor has failed to parse an incoming message, it shouldn't output the "bad" data into the logs, as that data is unfiltered and could have anything in it. (We also don't think this information is helpful to have in the logs anyway.)
`abci.Client`:
- Sync and Async methods now accept a context for cancellation
* grpc client uses context to cancel both Sync and Async requests
* local client ignores context parameter
* socket client uses context to cancel Sync requests and to drop Async requests before sending them if context was cancelled prior to that
- Async methods return an error
* socket client returns an error immediately if the queue is full for Async requests (see the sketch after this list)
* local client always returns nil error
* grpc client returns an error if the context was cancelled before we got a response or before the receiving queue had space for the response (not to be confused with the sending queue in the socket client)
- client semantics are now specified in [doc.go](https://raw.githubusercontent.com/tendermint/tendermint/27112fffa62276bc016d56741f686f0f77931748/abci/client/doc.go)
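For the socket client's Async path, the semantics above amount to roughly this (a sketch; the real queue and request types differ):

```go
// queueRequestAsync drops the request if the caller's context was already
// cancelled, and fails fast instead of blocking when the send queue is full.
func (c *socketClient) queueRequestAsync(ctx context.Context, req *ReqRes) (*ReqRes, error) {
	// Context cancelled before we got to send: drop the request.
	if err := ctx.Err(); err != nil {
		return nil, err
	}
	// Async requests must not block: return an error immediately if the
	// queue has no room.
	select {
	case c.reqQueue <- req:
		return req, nil
	default:
		return nil, errors.New("buffer is full")
	}
}
```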
`mempool.TxInfo`
- add optional `Context` to `TxInfo`, which can be used to cancel `CheckTx` request
Closes #5190
I found this issue when trying to decouple the mempool reactor from its underlying clist implementation.
According to its comment, the test `TestReactorNoBroadcastToSender` is intended to make sure that a peer doesn't send a tx back to its origin.
However, the current test forgets to init the peer state key, so the code gets stuck waiting for the peer to catch up *instead of* skipping sending the tx back:
b8d08b9ef4/mempool/reactor.go (L216-L226)
This PR fixes the issue by initializing the peer state key.
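A sketch of the fix, assuming the reactor's PeerState interface only requires GetHeight (as in mempool/reactor.go):

```go
// peerState is a minimal stub stored under types.PeerStateKey so the reactor
// sees the peer as caught up instead of blocking in its broadcast loop.
type peerState struct {
	height int64
}

func (ps peerState) GetHeight() int64 { return ps.height }

// in the test setup, for every mock peer:
peer.Set(types.PeerStateKey, peerState{height: 1})
```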
## Description
Add test vectors for all reactors
- [x] state-sync
- [x] privval
- [x] mempool
- [x] p2p
- [x] evidence
- [ ] light?
This PR is primarily oriented at test vectors for things going over the wire; should we expand the test vectors to cover types as well?
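As an example of what a wire-level vector looks like for the mempool, assuming the proto layout `message Message { oneof sum { Txs txs = 1; } }` with `message Txs { repeated bytes txs = 1; }` (import paths are assumptions; the expected hex follows from that layout):

```go
import (
	"encoding/hex"
	"testing"

	"github.com/stretchr/testify/require"

	memproto "github.com/tendermint/tendermint/proto/tendermint/mempool"
)

func TestMempoolMessageVectors(t *testing.T) {
	msg := memproto.Message{
		Sum: &memproto.Message_Txs{
			Txs: &memproto.Txs{Txs: [][]byte{[]byte("hello")}},
		},
	}
	bz, err := msg.Marshal()
	require.NoError(t, err)
	// field 1 (txs, len 7) wrapping field 1 (tx bytes "hello", len 5)
	require.Equal(t, "0a070a0568656c6c6f", hex.EncodeToString(bz))
}
```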
Closes: #XXX
## Description
This PR wraps the stdlib `sync.(RW)Mutex` and `godeadlock.(RW)Mutex`. This enables using go-deadlock via a build flag instead of using sed to replace `sync` with `godeadlock` in all files.
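A sketch of the wrapper approach: one file, guarded by the `deadlock` build tag, embeds the go-deadlock types (package name and layout are illustrative); a mirror file with `// +build !deadlock` embeds the stdlib `sync` types instead, so callers just use the wrapper's `Mutex`/`RWMutex`.

```go
// +build deadlock

package tmsync

import deadlock "github.com/sasha-s/go-deadlock"

// A Mutex is a mutual exclusion lock with deadlock detection enabled.
type Mutex struct {
	deadlock.Mutex
}

// An RWMutex is a reader/writer mutual exclusion lock with deadlock detection.
type RWMutex struct {
	deadlock.RWMutex
}
```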
Closes: #3242
In order to have more control over the mempool implementation, introduce a new exported function RemoveTxByKey.
Also export TxKey() and TxKeySize. Use the TxKeySize const instead of sha256.Size, so future changes to the hash function won't break the API.
This allows using a TxKey (a 32-byte reference) as the parameter instead of the complete transaction bytes, so the application layer does not need to keep track of the whole transaction but only of its sha256 hash (32 bytes).
This function is useful when mempool.Recheck is disabled. It allows the application layer to implement its own cleaning mechanism without having to re-implement the whole mempool interface.
Mempool.Update() would probably also need to change from txBytes to txKey, but that would require changing the interface and thus break backwards compatibility. For now, RemoveTxByKey() looks like a good compromise: it won't break anything and will help solve some mempool issues from the application layer.
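From the application side, usage would look roughly like this (a sketch; it assumes the concrete clist mempool and a `RemoveTxByKey(key, removeFromCache)` style signature, which may differ from the final API):

```go
// Remember only the 32-byte key instead of the full transaction bytes.
key := mempool.TxKey(txBytes) // sha256 of the raw tx, a [mempool.TxKeySize]byte

// Later, once the application decides the tx is no longer valid (and with
// Recheck disabled, the mempool won't figure this out on its own), evict it
// directly and drop it from the cache as well:
mp.RemoveTxByKey(key, true)
```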
Signed-off-by: p4u <pau@dabax.net>
## Description
To provide the ability to add more message types without causing a breaking change, the mempool message was migrated to a oneof.
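On the Go side, the reactor can then switch on the wrapped type, so adding a new message later only adds a new case (the generated type names are assumptions here):

```go
switch m := msg.Sum.(type) {
case *memproto.Message_Txs:
	for _, tx := range m.Txs.Txs {
		_ = tx // feed each tx into CheckTx, etc.
	}
default:
	// Unknown variants are logged and ignored rather than breaking peers
	// that speak a newer version of the protocol.
	logger.Error("unknown mempool message type", "type", fmt.Sprintf("%T", m))
}
```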
Closes: #XXX
* proto: move mempool to proto
- changes related to moving the mempool reactor to proto
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
Closes: #2883
Closes #4603
Commands used (VIM):
```
:args `rg -l errors.Wrap`
:argdo normal @q | update
```
where `q` is a macro rewriting `errors.Wrap` to `fmt.Errorf`.
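The macro performs a rewrite along these lines (using `%w` so the wrapped error stays unwrappable with `errors.Is`/`errors.As`):

```go
// before
return errors.Wrap(err, "failed to load state")

// after
return fmt.Errorf("failed to load state: %w", err)
```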