## Description
In consensus/state.go, when calculating metrics, retrieve the validator address (and hence the pubkey) once prior to iterating over the validator set, so we do not make excessive calls to the signer.
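A minimal sketch of the idea, with stand-in names rather than the actual consensus/state.go code:

```go
package metrics

import "bytes"

// Validator and Signer are illustrative stand-ins for the real types in
// consensus/state.go; they are not the actual Tendermint definitions.
type Validator struct {
	Address     []byte
	VotingPower int64
}

type Signer interface {
	// GetPubKeyAddress stands in for privValidator.GetPubKey().Address(),
	// which may reach out to a remote signer.
	GetPubKeyAddress() ([]byte, error)
}

// recordMetrics shows the idea: resolve the signer's address once, before the
// loop, instead of once per validator.
func recordMetrics(signer Signer, vals []Validator, setPower func(int64)) error {
	addr, err := signer.GetPubKeyAddress() // single call to the signer
	if err != nil {
		return err
	}
	for _, v := range vals {
		if bytes.Equal(v.Address, addr) {
			setPower(v.VotingPower) // e.g. update the validator power gauge
		}
	}
	return nil
}
```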
Partially closes: #4865
## Description
In https://github.com/tendermint/tendermint/pull/4466 a new type (EventAttribute) was created that replaced KV.Pair. This made the `Pair` type dead code; this PR removes it.
I don't think a changelog entry is needed since the type is no longer used, but I will add a section to UPGRADING.md to let users know that it was replaced.
Closes: #XXX
## Description
Bech32 is only used in the SDK. I moved Bech32 to the SDK, so we can remove it here. I don't think this needs to go into a minor release.
Closes: #XXX
Fixes #4802. The Go HTTP server has a global panic handler for requests, so it was not as severe as first thought.
This fix can still panic, since we try to send a `500` response - if that happens, the Go HTTP server will terminate the connection. Otherwise, the client will get a 200 response, which we should avoid. I'm sort of torn on whether it's even necessary to include this fix, instead of just letting the HTTP server deal with it.
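For illustration only, a generic recovery wrapper along these lines (not the exact Tendermint RPC handler) looks roughly like this:

```go
package rpcserver

import (
	"net/http"
	"runtime/debug"
)

// recoverHandler wraps an HTTP handler and converts panics into a 500
// response instead of letting the request die with no useful reply.
func recoverHandler(next http.Handler, logf func(format string, args ...interface{})) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		defer func() {
			if rec := recover(); rec != nil {
				logf("panic in RPC handler: %v\n%s", rec, debug.Stack())
				// Best effort: if the response was already partially written,
				// this may not reach the client, and as noted above the Go
				// HTTP server will simply terminate the connection.
				http.Error(w, "internal server error", http.StatusInternalServerError)
			}
		}()
		next.ServeHTTP(w, r)
	})
}
```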
Closes #4783
It looks like we're validating Commit twice. Also, height and blockID params were coming from the commit, so no need to pass them separately.
This is useful for custom state sync `StateProvider` implementations that need to build new states for bootstrapping the node. It will also be useful when implementing other mechanisms to bootstrap nodes, e.g. #4642 and #3713.
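For context, a provider of this kind roughly fills the role sketched below (simplified stand-in types; the real interface lives in the statesync package and works with Tendermint's own state and commit types):

```go
package statesyncsketch

// State stands in for the Tendermint consensus state a node is bootstrapped
// with (validators, consensus params, app hash, and so on).
type State struct {
	Height  int64
	AppHash []byte
}

// Commit stands in for the signed commit that proves the state's header.
type Commit struct {
	Height int64
}

// StateProvider is the kind of hook this change targets: something able to
// construct a verified State (and matching Commit) for a given height, so an
// empty node can be bootstrapped from a snapshot at that height.
type StateProvider interface {
	AppHash(height uint64) ([]byte, error)
	Commit(height uint64) (*Commit, error)
	State(height uint64) (State, error)
}
```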
The event loop uses a `select` on multiple channels. However, reading from a closed channel in Go always succeeds immediately, yielding the channel's zero value. The processor and scheduler close their channels when done, and since receives on those closed channels are always ready, the event loop keeps spinning on them.
This changes `routine.terminate()` to not close the channel, and also removes `stopDemux` and instead uses `events` channel closure to signal event loop termination.
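A tiny standalone Go illustration of the underlying behavior (not Tendermint code): receives on a closed channel never block, so a `select` that keeps including one will spin.

```go
package main

import "fmt"

func main() {
	events := make(chan int)
	done := make(chan struct{})
	close(done) // a closed channel is always ready to receive from

	go func() {
		events <- 1
		close(events)
	}()

	for i := 0; i < 5; i++ {
		select {
		case e, ok := <-events:
			fmt.Println("event:", e, "open:", ok) // after close: zero value, ok=false
		case <-done:
			// This case is always ready once `done` is closed, so a loop that
			// keeps selecting on it busy-spins instead of terminating.
			fmt.Println("done is closed; select keeps firing")
		}
	}
}
```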
Fixes #4687.
Fixes#828. Adds state sync, as outlined in [ADR-053](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-053-state-sync-prototype.md). See related PRs in Cosmos SDK (https://github.com/cosmos/cosmos-sdk/pull/5803) and Gaia (https://github.com/cosmos/gaia/pull/327).
This is split out of the previous PR #4645, and branched off of the ABCI interface in #4704.
* Adds a new P2P reactor which exchanges snapshots with peers, and bootstraps an empty local node from remote snapshots when requested.
* Adds a new configuration section `[statesync]` that enables state sync and configures the light client. Also enables `statesync:info` logging by default.
* Integrates state sync into node startup. Does not support the v2 blockchain reactor, since it needs some reorganization to defer startup.
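For context, the application side that this reactor drives (from the ABCI interface in #4704) boils down to four snapshot calls; below is a rough sketch of their shape with simplified stand-in types, not the generated protobuf request/response structs:

```go
package abcisketch

// Snapshot is a simplified stand-in for the snapshot metadata an application
// advertises.
type Snapshot struct {
	Height   uint64 // block height the snapshot was taken at
	Format   uint32 // application-defined snapshot format version
	Chunks   uint32 // number of chunks the snapshot is split into
	Hash     []byte // hash used to cross-check snapshots between peers
	Metadata []byte // arbitrary application metadata
}

// SnapshotApp captures the four snapshot-related ABCI calls the state sync
// reactor relies on: serving snapshots and chunks to peers, and offering and
// applying them when restoring an empty node.
type SnapshotApp interface {
	ListSnapshots() ([]Snapshot, error)
	LoadSnapshotChunk(height uint64, format, chunk uint32) ([]byte, error)
	OfferSnapshot(snapshot Snapshot, appHash []byte) (accept bool, err error)
	ApplySnapshotChunk(index uint32, chunk []byte) (accept bool, err error)
}
```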
Fixes #2593. @alexanderbez, this is used in a single place in the SDK; how upset are you about removing it?
______
For contributor use:
- [x] ~Wrote tests~
- [x] Updated CHANGELOG_PENDING.md
- [x] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
- [x] Updated relevant documentation (`docs/`) and code comments
- [x] Re-reviewed `Files changed` in the Github PR explorer
- [x] Applied Appropriate Labels
Reduce the number of targets and make the build system more
flexible by parsing the TENDERMINT_BUILD_OPTIONS command-line
variable (a la Debian, inspired by dpkg-buildpackage's
DEB_BUILD_OPTIONS), e.g.:
$ make install TENDERMINT_BUILD_OPTIONS='cleveldb'
replaces the old:
$ make install_c
Options can be mixed and matched, e.g.:
$ make install TENDERMINT_BUILD_OPTIONS='cleveldb race nostrip'
Three options are available:
- nostrip: don't strip debugging symbols or DWARF tables.
- cleveldb: use cleveldb as the DB backend instead of goleveldb;
  it switches on the CGO_ENABLED Go environment variable.
- race: pass -race to go build and enable data race detection.
This changeset is a port of gaia pull request: cosmos/gaia#363.
Fixes #4739, kind of. See #4740 for the proper fix.
---
For contributor use:
- [x] Wrote tests
- [x] Updated CHANGELOG_PENDING.md
- [x] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
- [x] Updated relevant documentation (`docs/`) and code comments
- [x] Re-reviewed `Files changed` in the Github PR explorer
- [x] Applied Appropriate Labels
Merged the existing store into the pool, consolidated the three buckets into two, used block height as a marker for committed evidence, made the evidence list recover on startup, and improved error handling.
- Move core stateless validation of the Header type to a ValidateBasic method (see the sketch after this list).
- Call header.ValidateBasic during a SignedHeader validation.
- Call header.ValidateBasic during a PhantomValidatorEvidence validation.
- Call header.ValidateBasic during a LunaticValidatorEvidence validation.
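A minimal sketch of the `ValidateBasic` pattern referenced above (the fields and checks are trimmed down; the real `Header` has many more):

```go
package types

import (
	"errors"
	"fmt"
	"time"
)

// Header is a trimmed-down stand-in for the real block header.
type Header struct {
	ChainID string
	Height  int64
	Time    time.Time
}

// ValidateBasic performs stateless checks only: it needs no access to
// blockchain state, so callers such as SignedHeader validation or the
// evidence types above can reuse it before doing their own contextual checks.
func (h Header) ValidateBasic() error {
	if h.ChainID == "" {
		return errors.New("empty chain ID")
	}
	if h.Height <= 0 {
		return fmt.Errorf("non-positive height: %d", h.Height)
	}
	if h.Time.IsZero() {
		return errors.New("zero block time")
	}
	return nil
}
```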
Lite tests are skipped since the package is deprecated; no need to waste time on it.
closes: #4572
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
Closes: #4530
This PR contains logic for both submitting evidence from the light client (lite2 package) and receiving it on the Tendermint side (/broadcast_evidence RPC and/or EvidenceReactor#Receive). Upon receiving the ConflictingHeadersEvidence (introduced by this PR), Tendermint validates it, then breaks it down into smaller pieces (DuplicateVoteEvidence, LunaticValidatorEvidence, PhantomValidatorEvidence, PotentialAmnesiaEvidence). Afterwards, each piece of evidence is verified against the state of the full node and added to the pool, from which it's reaped upon block creation.
* rpc/client: do not pass height param if height ptr is nil
* rpc/core: validate incoming evidence!
* only accept ConflictingHeadersEvidence if one
of the headers is committed from this full node's perspective
This simplifies the code. Plus, if there are multiple forks, we're
likely to receive multiple ConflictingHeadersEvidence anyway.
* swap CommitSig with Vote in LunaticValidatorEvidence
Vote is needed to validate signature
* no need to embed client
http is a provider and should not be used as a client
Closes: #4695
Verify /block_results and /validators responses from an HTTP client using the light client.
Added count and total to /validators response.
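The response shape with the new fields, sketched as a Go struct (a simplified stand-in for the actual RPC result type):

```go
package coretypes

// ResultValidators sketches the /validators response after this change:
// Count is the number of validators returned in this (possibly paginated)
// response, Total is the overall size of the validator set at that height.
type ResultValidators struct {
	BlockHeight int64       `json:"block_height"`
	Validators  []Validator `json:"validators"`
	Count       int         `json:"count"`
	Total       int         `json:"total"`
}

// Validator is a trimmed-down stand-in for the real validator type.
type Validator struct {
	Address     string `json:"address"`
	VotingPower int64  `json:"voting_power"`
}
```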
Refs #3113
See #4588 for original change.
I believe this is appropriate. Anything else that needs to be updated?
______
For contributor use:
- [ ] ~Wrote tests~
- [x] Updated CHANGELOG_PENDING.md
- [x] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
- [ ] ~Updated relevant documentation (`docs/`) and code comments~
- [x] Re-reviewed `Files changed` in the Github PR explorer
How to use it:
```
$ . <(tendermint completion)
```
Note that the completion command does not show up in the help screen,
though it comes with its own --help option.
This is a port of the feature provided by cosmos-sdk.
Add a Has function, improve error handling when adding evidence, and use error types.
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* mark unsolicited and too-frequent messages as bad
* add tests
* update changelog and fix error
* revised error types
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* Added BlockStore.DeleteBlock()
* Added initial block pruner prototype
* wip
* Added BlockStore.PruneBlocks()
* Added consensus setting for block pruning
* Added BlockStore base
* Error on replay if base does not have blocks
* Handle missing blocks when sending VoteSetMaj23Message
* Error message tweak
* Properly update blockstore state
* Error message fix again
* blockchain: ignore peer missing blocks
* Added FIXME
* Added test for block replay with truncated history
* Handle peer base in blockchain reactor
* Improved replay error handling
* Added tests for Store.PruneBlocks()
* Fix non-RPC handling of truncated block history
* Panic on missing block meta in needProofBlock()
* Updated changelog
* Handle truncated block history in RPC layer
* Added info about earliest block in /status RPC
* Reorder height and base in blockchain reactor messages
* Updated changelog
* Fix tests
* Appease linter
* Minor review fixes
* Non-empty BlockStores should always have base > 0
* Update code to assume base > 0 invariant
* Added blockstore tests for pruning to 0
* Make sure we don't prune below the current base
* Added BlockStore.Size()
* config: added retain_blocks recommendations
* Update v1 blockchain reactor to handle blockstore base
* Added state database pruning
* Propagate errors on missing validator sets
* Comment tweaks
* Improved error message
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* use ABCI field ResponseCommit.retain_height instead of retain-blocks config option (see the sketch after this list)
* remove State.RetainHeight, return value instead
* fix minor issues
* rename pruneHeights() to pruneBlocks()
* noop to fix GitHub borkage
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
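Tying the last items together, an application now signals pruning via `RetainHeight` in its `Commit` response. A hedged sketch of what that might look like in an ABCI app; the app type and the `pruneWindow` knob are illustrative, not part of Tendermint:

```go
package app

import (
	abci "github.com/tendermint/tendermint/abci/types"
)

// KVStoreApp is an illustrative ABCI application fragment.
type KVStoreApp struct {
	abci.BaseApplication
	height      int64
	pruneWindow int64 // how many recent heights to keep; illustrative knob
}

// Commit returns the app hash and, via RetainHeight, tells Tendermint which
// blocks it may prune: everything below RetainHeight can be removed from the
// block store (with the corresponding state pruned as well).
func (a *KVStoreApp) Commit() abci.ResponseCommit {
	a.height++
	retain := a.height - a.pruneWindow
	if retain < 0 {
		retain = 0 // 0 means "retain everything"
	}
	return abci.ResponseCommit{
		Data:         []byte("app hash goes here"), // illustrative
		RetainHeight: retain,
	}
}
```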
Closes: #4537
Uses SignedHeaderBefore to find the header before the unverified header and then bisection to verify it. This is done only when the header is between the first and last trusted header heights; if it is before the first trusted header height, regular backwards verification is used instead.
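In pseudocode terms, the chosen verification path looks roughly like this (function names are descriptive stand-ins, not the lite2 API):

```go
package litesketch

import "errors"

// verifyHeaderAtHeight sketches the decision described above.
func verifyHeaderAtHeight(h, firstTrusted, lastTrusted int64,
	backwards func(h int64) error, // sequential verification back from the first trusted header
	bisectFrom func(trusted, target int64) error, // bisection starting at a trusted header
	signedHeaderBefore func(h int64) (int64, error), // nearest trusted header below h
) error {
	switch {
	case h < firstTrusted:
		// Older than anything we trust: walk backwards header by header.
		return backwards(h)
	case h <= lastTrusted:
		// Between the first and last trusted heights: find the closest trusted
		// header below the target and bisect from it.
		start, err := signedHeaderBefore(h)
		if err != nil {
			return err
		}
		return bisectFrom(start, h)
	default:
		return errors.New("heights above the latest trusted header go through the normal forward path")
	}
}
```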
* proto: use docker to generate stubs
- provide an option to developers to use docker to generate proto stubs
closes #4579
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
Previously, many reactors were initialized with the name "Reactor," which made it difficult to log which reactor was doing what. This changes those reactors' names to something more descriptive.
Closes: #4546
The algorithm uses an array to store the headers and validators and populates it at every bisection (i.e. every unsuccessful verification). When a successful verification finally occurs, it updates the new trusted header, trims that header from the cache (the array), and sets the depth pointer back to 0. Instead of retrieving new headers, it uses the cached headers, incrementing the depth until it reaches the end of the cache, at which point it starts retrieving new headers from the provider again.
Mathematically, this method doesn't properly bisect after the first round, but it will always choose a pivot header that is within 1/8th of the upper header's height. I.e. if we are trying to jump 128 headers, the maximum offset from the bisection height (64) is 64 + 128/8 = 80, so a better heuristic would be to take the new pivot header height as the middle of these two numbers, which means multiplying by 9/16 instead of 1/2 (this is a bit involved in writing; I can explain further if anyone is interested). I would therefore also propose, upon consensus, that we change the pivot height to 9/16ths of the previous height.
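To make the arithmetic concrete, a tiny sketch of the proposed pivot choice (purely illustrative):

```go
package main

import "fmt"

func main() {
	trusted, target := int64(0), int64(128) // trying to jump 128 headers

	mid := trusted + (target-trusted)/2         // plain bisection pivot: 64
	maxOffset := mid + (target-trusted)/8       // cache reuse can push it up to: 80
	proposed := trusted + (target-trusted)*9/16 // midpoint of 64 and 80: 72

	fmt.Println(mid, maxOffset, proposed) // 64 80 72
}
```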