The blockchain/vX reactor priority was decreased because, during normal operation
(i.e. when the node is not fast syncing), the blockchain priority must not be
the same as the consensus reactor priority; otherwise it is theoretically possible to
slow down consensus by constantly requesting blocks from the node.
NOTE: ideally the blockchain/vX reactor priority would be dynamic, e.g. priority 10
(max) while the node is fast syncing, dropping to 5 once it is done (at that
point the reactor only serves blocks to other nodes). That is not possible yet,
so the priority is tuned for normal operation (priority = 5).
Evidence and consensus-critical messages are more important than
mempool ones, hence their priorities are bumped by 1 (from 5 to 6).
The statesync reactor priority was changed from 1 to 5 to match the
blockchain/vX priority.
Refs https://github.com/tendermint/tendermint/issues/5816
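For context, per-channel priority is declared in the reactor's `GetChannels()`. A minimal sketch of where the tuned value lives (the queue/buffer sizes and the msg-size cap are illustrative, not the real constants):

```go
package blockchain

import "github.com/tendermint/tendermint/p2p"

const (
	BlockchainChannel = byte(0x40)
	maxMsgSize        = 1048576 // illustrative cap
)

type BlockchainReactor struct {
	p2p.BaseReactor
}

// GetChannels is where the per-reactor priority lives; the change above
// lowers Priority so that serving blocks can never crowd out consensus
// traffic during normal operation.
func (bcR *BlockchainReactor) GetChannels() []*p2p.ChannelDescriptor {
	return []*p2p.ChannelDescriptor{
		{
			ID:                  BlockchainChannel,
			Priority:            5, // was 10; kept below consensus-critical channels
			SendQueueCapacity:   1000,
			RecvBufferCapacity:  50 * 4096,
			RecvMessageCapacity: maxMsgSize,
		},
	}
}
```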
After a reactor fails to parse an incoming message, it shouldn't output the "bad" data into the logs, as that data is unfiltered and could contain anything. (We also don't think this information is helpful to have in the logs anyway.)
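A sketch of the resulting pattern in a reactor's `Receive`: log the error, the peer, and the channel, but never the raw payload (the `Reactor` type and `decodeMsg` stub here are stand-ins for the real ones):

```go
package blockchain

import (
	"errors"

	"github.com/tendermint/tendermint/p2p"
)

type Reactor struct{ p2p.BaseReactor }

type Message interface{}

// decodeMsg stands in for the reactor's real decoder.
func decodeMsg(bz []byte) (Message, error) { return nil, errors.New("example") }

// Receive logs a decode failure with the peer and channel, but never the raw
// msgBytes: that payload is attacker-controlled and useless in logs.
func (r *Reactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
	msg, err := decodeMsg(msgBytes)
	if err != nil {
		r.Logger.Error("error decoding message", "src", src, "chId", chID, "err", err)
		r.Switch.StopPeerForError(src, err)
		return
	}
	_ = msg // handle the decoded message as usual
}
```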
- drop Height & Base from StatusRequest
These fields make no sense and are not used anywhere currently. There
also seems to be no trace of them in ADR-40 (blockchain reactor
v2).
- change PacketMsg#EOF type from int32 to bool
check the bcR.fastSync flag in OnStop
fix the `service/service.go:161 Not stopping BlockPool -- have not been started yet {"impl": "BlockPool"}` error when killing the process
Fixes #828. Adds state sync, as outlined in [ADR-053](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-053-state-sync-prototype.md). See related PRs in Cosmos SDK (https://github.com/cosmos/cosmos-sdk/pull/5803) and Gaia (https://github.com/cosmos/gaia/pull/327).
This is split out of the previous PR #4645, and branched off of the ABCI interface in #4704.
* Adds a new P2P reactor which exchanges snapshots with peers, and bootstraps an empty local node from remote snapshots when requested.
* Adds a new configuration section `[statesync]` that enables state sync and configures the light client. Also enables `statesync:info` logging by default.
* Integrates state sync into node startup. Does not support the v2 blockchain reactor, since it needs some reorganization to defer startup.
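A hedged sketch of enabling the new `[statesync]` section programmatically (the field names follow the config package of that era; the servers, height, and hash are placeholders):

```go
package main

import (
	"time"

	"github.com/tendermint/tendermint/config"
)

func main() {
	conf := config.DefaultConfig()
	conf.StateSync.Enable = true
	// The light client needs RPC servers and a trusted header to verify
	// snapshots against.
	conf.StateSync.RPCServers = []string{"host1:26657", "host2:26657"} // placeholders
	conf.StateSync.TrustHeight = 100                                   // placeholder height
	conf.StateSync.TrustHash = "placeholder_header_hash"               // hash at TrustHeight
	conf.StateSync.TrustPeriod = 168 * time.Hour
	_ = conf
}
```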
to prevent malicious nodes from sending us large messages (~21MB, the
default `RecvMessageCapacity`)
This also lets us remove the now-unnecessary `maxMsgSize` check in `decodeMsg`: since each channel's message capacity is set to `maxMsgSize`, there's no need to check it again there.
Closes #1503
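A before/after sketch of the `decodeMsg` simplification (the two versions are alternatives shown side by side; `cdc` is the package's amino codec and `BlockchainMessage` its message interface):

```go
// Before: decodeMsg duplicated the size check that the p2p layer already
// performs via the channel's RecvMessageCapacity.
func decodeMsg(bz []byte) (msg BlockchainMessage, err error) {
	if len(bz) > maxMsgSize {
		return msg, fmt.Errorf("msg exceeds max size (%d > %d)", len(bz), maxMsgSize)
	}
	err = cdc.UnmarshalBinaryBare(bz, &msg)
	return
}

// After: with RecvMessageCapacity set to maxMsgSize on the channel
// descriptor, oversized messages never reach the reactor, so the check goes.
func decodeMsg(bz []byte) (msg BlockchainMessage, err error) {
	err = cdc.UnmarshalBinaryBare(bz, &msg)
	return
}
```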
* Added BlockStore.DeleteBlock()
* Added initial block pruner prototype
* wip
* Added BlockStore.PruneBlocks()
* Added consensus setting for block pruning
* Added BlockStore base
* Error on replay if base does not have blocks
* Handle missing blocks when sending VoteSetMaj23Message
* Error message tweak
* Properly update blockstore state
* Error message fix again
* blockchain: ignore peer missing blocks
* Added FIXME
* Added test for block replay with truncated history
* Handle peer base in blockchain reactor
* Improved replay error handling
* Added tests for Store.PruneBlocks()
* Fix non-RPC handling of truncated block history
* Panic on missing block meta in needProofBlock()
* Updated changelog
* Handle truncated block history in RPC layer
* Added info about earliest block in /status RPC
* Reorder height and base in blockchain reactor messages
* Updated changelog
* Fix tests
* Appease linter
* Minor review fixes
* Non-empty BlockStores should always have base > 0
* Update code to assume base > 0 invariant
* Added blockstore tests for pruning to 0
* Make sure we don't prune below the current base
* Added BlockStore.Size()
* config: added retain_blocks recommendations
* Update v1 blockchain reactor to handle blockstore base
* Added state database pruning
* Propagate errors on missing validator sets
* Comment tweaks
* Improved error message
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* use ABCI field ResponseCommit.retain_height instead of retain-blocks config option
* remove State.RetainHeight, return value instead
* fix minor issues
* rename pruneHeights() to pruneBlocks()
* noop to fix GitHub borkage
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
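With the config option gone, pruning is driven by the application. A minimal sketch of an ABCI app requesting it via `ResponseCommit.RetainHeight` (the retention window is illustrative):

```go
package main

import (
	abci "github.com/tendermint/tendermint/abci/types"
)

// App is a toy ABCI app showing the new pruning mechanism: instead of a
// retain-blocks config option, the application returns RetainHeight from
// Commit and Tendermint prunes blocks below it.
type App struct {
	abci.BaseApplication
	height int64
}

func (app *App) Commit() abci.ResponseCommit {
	app.height++
	retain := app.height - 100000 // keep roughly the last 100000 blocks (illustrative)
	if retain < 0 {
		retain = 0 // zero means "retain everything"
	}
	return abci.ResponseCommit{RetainHeight: retain}
}

func main() { _ = &App{} }
```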
* libs/common: refactor libs common 3
- move nil.go into types folder and make private
- move service & baseservice out of common into service pkg
ref #4147
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* add changelog entry
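After the move, callers import the new `service` package instead of `libs/common`; a minimal sketch of the new import path and the usual `BaseService` embed:

```go
package main

import (
	"os"

	"github.com/tendermint/tendermint/libs/log"
	"github.com/tendermint/tendermint/libs/service" // moved out of libs/common
)

// Worker embeds BaseService, which supplies Start/Stop/Logger plumbing.
type Worker struct {
	service.BaseService
}

func (w *Worker) OnStart() error { return nil } // start goroutines here
func (w *Worker) OnStop()        {}             // stop them here

func main() {
	w := &Worker{}
	w.BaseService = *service.NewBaseService(log.NewTMLogger(os.Stdout), "Worker", w)
	if err := w.Start(); err != nil {
		panic(err)
	}
	_ = w.Stop()
}
```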
cleanup to add linter
grpc change:
https://godoc.org/google.golang.org/grpc#WithContextDialer
https://godoc.org/google.golang.org/grpc#WithDialer
grpc/grpc-go#2627
prometheus change:
due to UninstrumentedHandler being deprecated in the future
empty branch = an empty if or else statement
I didn't delete these entirely but commented them out, since I couldn't find a reason to keep them. I could not replicate issue #3406, but if we want to keep the code commented out, we should comment out the if statement as well.
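The gRPC change replaces the deprecated `grpc.WithDialer` with `grpc.WithContextDialer`; a sketch of the migration:

```go
package main

import (
	"context"
	"net"

	"google.golang.org/grpc"
)

// dialConn shows the migration: the deprecated grpc.WithDialer took
// (addr string, timeout time.Duration); grpc.WithContextDialer takes a
// context instead, so cancellation propagates naturally.
func dialConn(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(
		addr,
		grpc.WithInsecure(),
		grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "tcp", addr)
		}),
	)
}

func main() { _, _ = dialConn("localhost:9090") }
```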
* go routines in blockchain reactor
* Added reference to the go routine diagram
* Initial commit
* cleanup
* Undo testing_logger change, committed by mistake
* Fix the test loggers
* pulled some fsm code into pool.go
* added pool tests
* changes to the design
added block requests under peer
moved the request trigger into the reactor's poolRoutine, now triggered by a ticker
in general, moved everything required for making block requests smarter into the poolRoutine
added a simple map of heights to keep track of what needs to be requested next
added a few more tests
* send errors to FSM in a different channel than blocks
send errors (RemovePeer) from switch on a different channel than the
one receiving blocks
renamed channels
added more pool tests
* more pool tests
* lint errors
* more tests
* more tests
* switch fast sync to new implementation
* fixed data race in tests
* cleanup
* finished fsm tests
* address golangci comments :)
* address golangci comments :)
* Added timeout on next block needed to advance
* updating docs and cleanup
* fix issue in test from previous cleanup
* cleanup
* Added termination scenarios, tests and more cleanup
* small fixes to adr, comments and cleanup
* Fix bug in sendRequest()
If we tried to send a request to a peer not present in the switch, a
missing continue statement caused the request to be blackholed against a peer
that had been removed, and it was never retried (see the sketch after this list).
While this bug was manifesting, the reactor kept asking for other
blocks, which would be stored and never consumed. The number of
unconsumed blocks is now included in the math for requesting blocks ahead of
the current processing height, so eventually no more blocks are requested
until the already-received ones are consumed.
* remove bpPeer's didTimeout field
* Use distinct err codes for peer timeout and FSM timeouts
* Don't allow peers to update with lower height
* review comments from Ethan and Zarko
* some cleanup, renaming, comments
* Move block execution in separate goroutine
* Remove pool's numPending
* review comments
* fix lint, remove old blockchain reactor and duplicates in fsm tests
* small reorg around peer after review comments
* add the reactor spec
* verify block only once
* review comments
* change to int for max number of pending requests
* cleanup and godoc
* Add configuration flag fast sync version
* golangci fixes
* fix config template
* move both reactor versions under blockchain
* cleanup, golint, renaming stuff
* updated documentation, fixed more golint warnings
* integrate with behavior package
* sync with master
* gofmt
* add changelog_pending entry
* move to improvements
* suggestion to changelog entry
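For the sendRequest() fix above, a hedged sketch of the missing-`continue` pattern (the loop shape, helper names, and fields are assumptions, not the actual reactor code):

```go
// Assign each scheduled height to a peer; skip peers that have already
// left the switch so the height gets rescheduled instead of blackholed.
for _, height := range heightsToRequest {
	peerID := pool.PickPeer(height) // hypothetical picker
	peer := sw.Peers().Get(peerID)
	if peer == nil {
		pool.RescheduleRequest(height) // hypothetical; retry with another peer
		continue                       // this `continue` was the missing piece
	}
	peer.TrySend(BlockchainChannel, encodeBlockRequest(height)) // hypothetical encoder
}
```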
Fixes #3457
The issue: writing a BlockRequest into the requestsCh channel also starts a timer that stops the peer 15s later if no block has been received. But popping a BlockRequest from requestsCh and sending it out may be delayed by more than 15s, in which case the peer is stopped with the error ("send nothing to us").
Extracting the requestsCh handling into its own goroutine ensures that every BlockRequest is handled in a timely manner.
Instead of the requestsCh handling, we should probably pull the didProcessCh handling into a separate goroutine, since that is the one "starving" the other channel handlers. I believe that, as it is right now, we still have issues with high delays in errorsCh handling, which might cause sending requests to invalid/disconnected peers.
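A sketch of the pattern being discussed: pulling one channel's handling out of the shared select loop so slow work on one channel can't starve the timers of another (channel types and helper names are illustrative):

```go
package blockchain

import "time"

// poolRoutine sketch: requests get their own goroutine so that slow handling
// in the main select loop can never delay a BlockRequest past the peer's
// 15s "send nothing to us" timeout.
func poolRoutine(requestsCh <-chan int64, errorsCh <-chan error, ticker *time.Ticker) {
	go func() {
		for height := range requestsCh {
			sendRequest(height) // forward the BlockRequest to a peer promptly
		}
	}()

	for {
		select {
		case err := <-errorsCh:
			handleError(err) // e.g. stop or penalize the offending peer
		case <-ticker.C:
			scheduleMoreRequests() // trigger the next batch of block requests
		}
	}
}

func sendRequest(height int64) {}
func handleError(err error)    {}
func scheduleMoreRequests()    {}
```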
* validate reactor messages
Refs #2683
* validate blockchain messages
Refs #2683
* validate evidence messages
Refs #2683
* todo
* check ProposalPOL and signature sizes
* add a changelog entry
* check addr is valid when we add it to the addrbook
* validate incoming netAddr (not just nil check!)
* fixes after Bucky's review
* check timestamps
* beef up block#ValidateBasic
* move some checks into bcBlockResponseMessage (see the ValidateBasic sketch after this list)
* update Gopkg.lock
Fix
```
grouped write of manifest, lock and vendor: failed to export github.com/tendermint/go-amino: fatal: failed to unpack tree object 6dcc6ddc14
```
by running `dep ensure -update`
* bump year since now we check it
* generate test/p2p/data on the fly using tendermint testnet
* allow syncing chains older than 1 year
* use full path when creating a testnet
* move testnet gen to test/docker/Dockerfile
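A minimal sketch of the message-validation pattern these commits introduce, mirroring the kind of stateless check added to the blockchain messages:

```go
package blockchain

import "errors"

type bcBlockRequestMessage struct {
	Height int64
}

// ValidateBasic performs stateless checks on a decoded message; the reactor
// calls it right after decoding in Receive and stops the peer on error.
func (m *bcBlockRequestMessage) ValidateBasic() error {
	if m.Height < 0 {
		return errors.New("negative Height")
	}
	return nil
}
```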
* relax LastCommitRound check
Refs #2737
* fix conflicts after merge
* add small comment
* some ValidateBasic updates
* fixes
* AppHash length is not fixed
* remove ConsensusParams.TxSize and ConsensusParams.BlockGossip
Refs #2347
* block part size is now fixed
Refs #2347
* use max data size, not max bytes for tx limit
Refs #2347