@@ -1,868 +0,0 @@
# Changelog

## TBD

FEATURES:

- [node] added metrics (served under `/metrics` using a Prometheus client; disabled by default)

## 0.20.1

BUG FIXES:

- [rpc] fix memory leak in WebSocket (when using the `/subscribe` method)

## 0.20.0

*June 6th, 2018*

This is the first in a series of breaking releases coming to Tendermint after
soliciting developer feedback and conducting security audits.

This release does not break any blockchain data structures or
protocols other than the ABCI messages between Tendermint and the application.

Applications that upgrade for ABCI v0.11.0 should be able to continue running Tendermint
v0.20.0 on blockchains created with v0.19.X.

BREAKING CHANGES:

- [abci] Upgrade to
  [v0.11.0](https://github.com/tendermint/abci/blob/master/CHANGELOG.md#0110)
- [abci] Change Query path for filtering peers by node ID from
  `p2p/filter/pubkey/<id>` to `p2p/filter/id/<id>` (see the sketch below)
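For illustration, a minimal sketch of how an application might answer the new peer-filter Query path; the function and the `allowedPeers` set are hypothetical stand-ins, not the real ABCI types:

```
package main

import (
	"fmt"
	"strings"
)

// allowedPeers stands in for application state.
var allowedPeers = map[string]bool{"deadbeef": true}

// filterPeerByID sketches how an app's ABCI Query handler might answer the
// new p2p/filter/id/<id> path; a non-OK answer would reject the peer.
func filterPeerByID(path string) bool {
	const prefix = "p2p/filter/id/"
	if !strings.HasPrefix(path, prefix) {
		return true // not a peer-filter query
	}
	return allowedPeers[strings.TrimPrefix(path, prefix)]
}

func main() {
	fmt.Println(filterPeerByID("p2p/filter/id/deadbeef")) // true
	fmt.Println(filterPeerByID("p2p/filter/id/cafebabe")) // false
}
```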
## 0.19.9

*June 5th, 2018*

BREAKING CHANGES:

- [types/priv_validator] Moved to top-level `privval` package

FEATURES:

- [config] Collapse PeerConfig into P2PConfig
- [docs] Add quick-install script
- [docs/spec] Add table of Amino prefixes

BUG FIXES:

- [rpc] Return 404 for unknown endpoints
- [consensus] Flush WAL on stop
- [evidence] Don't send evidence to peers that are behind
- [p2p] Fix memory leak on peer disconnects
- [rpc] Fix panic when `per_page=0`

## 0.19.8

*June 4th, 2018*

BREAKING CHANGES:

- [p2p] Remove `auth_enc` config option; peer connections are always auth
  encrypted. Technically a breaking change, but it seems no one was using it,
  and it is arguably a bug fix :)

BUG FIXES:

- [mempool] Fix deadlock under high load when `skip_timeout_commit=true` and
  `create_empty_blocks=false`

## 0.19.7

*May 31st, 2018*

BREAKING CHANGES:

- [libs/pubsub] TagMap#Get returns a string value
- [libs/pubsub] NewTagMap accepts a map of strings

FEATURES:

- [rpc] the RPC documentation is now published to https://tendermint.github.io/slate
- [p2p] `AllowDuplicateIP` config option; when false, connections from an IP
  already in use by another peer are refused
  - true by default for now, false by default in the next breaking release
- [docs] Add docs for query, tx indexing, events, pubsub
- [docs] Add some notes about running Tendermint in production

IMPROVEMENTS:

- [consensus] Consensus reactor now receives events from a separate synchronous event bus,
  so it is not dependent on external RPC load
- [consensus/wal] do not look for height in older files if we've seen height - 1
- [docs] Various cleanup and link fixes
## 0.19.6

*May 29th, 2018*

BUG FIXES:

- [blockchain] Fix fast-sync deadlock during high peer turnover
- [evidence] Don't send peers evidence from heights they haven't synced to yet
- [p2p] Refuse connections to more than one peer with the same IP
- [docs] Various fixes

## 0.19.5

*May 20th, 2018*

BREAKING CHANGES:

- [rpc/client] TxSearch and UnconfirmedTxs have new arguments (see below)
- [rpc/client] TxSearch returns ResultTxSearch
- [version] Breaking changes to Go APIs will not be reflected in a breaking
  version change, but will be included in the changelog.

FEATURES:

- [rpc] `/tx_search` takes `page` (starts at 1) and `per_page` (max 100, default 30) args to paginate results (see the sketch after this list)
- [rpc] `/unconfirmed_txs` takes `limit` (max 100, default 30) arg to limit the output
- [config] `mempool.size` and `mempool.cache_size` options
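As an illustration of the new pagination arguments, a minimal Go sketch that queries `/tx_search` over HTTP; the query string, port, and local node are assumptions for the example:

```
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
)

func main() {
	// page starts at 1; per_page is capped at 100 and defaults to 30.
	q := url.Values{}
	q.Set("query", `"account.name='igor'"`) // illustrative tag query
	q.Set("page", "1")
	q.Set("per_page", "30")

	resp, err := http.Get("http://localhost:26657/tx_search?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body)) // raw JSONRPC response with the page of results
}
```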
IMPROVEMENTS:

- [docs] Lots of updates
- [consensus] Only Fsync() the WAL before executing msgs from ourselves

BUG FIXES:

- [mempool] Enforce upper bound on number of transactions

## 0.19.4 (May 17th, 2018)

IMPROVEMENTS:

- [state] Improve tx indexing by using batches
- [consensus, state] Improve logging (more consensus logs, fewer tx logs)
- [spec] Moved to `docs/spec` (TODO: clean up the rest of the docs ...)

BUG FIXES:

- [consensus] Fix issue #1575 where a late proposer can get stuck

## 0.19.3 (May 14th, 2018)

FEATURES:

- [rpc] New `/consensus_state` returns just the votes seen at the current height

IMPROVEMENTS:

- [rpc] Add stringified votes and fraction of power voted to `/dump_consensus_state`
- [rpc] Add PeerStateStats to `/dump_consensus_state`

BUG FIXES:

- [cmd] Set GenesisTime during `tendermint init`
- [consensus] fix ValidBlock rules

## 0.19.2 (April 30th, 2018)

FEATURES:

- [p2p] Allow peers with different Minor versions to connect
- [rpc] `/net_info` includes `n_peers`

IMPROVEMENTS:

- [p2p] Various code comments, cleanup, error types
- [p2p] Change some Error logs to Debug

BUG FIXES:

- [p2p] Fix reconnect to persistent peer when first dial fails
- [p2p] Validate NodeInfo.ListenAddr
- [p2p] Only allow (MaxNumPeers - MaxNumOutboundPeers) inbound peers
- [p2p/pex] Limit max msg size to 64kB
- [p2p] Fix panic when pex=false
- [p2p] Allow multiple IPs per ID in AddrBook
- [p2p] Fix before/after bugs in addrbook isBad()
## 0.19.1 (April 27th, 2018)

Note this release includes some small breaking changes in the RPC and one in the
config that are really bug fixes. v0.19.1 will work with existing chains and makes
Tendermint easier to use and debug. With <3

BREAKING CHANGES (MINOR):

- [config] Removed `wal_light` setting. If you really needed this, let us know.

FEATURES:

- [networks] moved in tooling from devops repo: terraform and ansible scripts for deploying testnets!
- [cmd] Added `gen_node_key` command

BUG FIXES:

Some of these are breaking in the RPC response, but they're really bugs!

- [spec] Document address format and pubkey encoding pre and post Amino
- [rpc] Lower case JSON field names
- [rpc] Fix missing entries, improve, and lower case the fields in `/dump_consensus_state`
- [rpc] Fix NodeInfo.Channels format to hex
- [rpc] Add Validator address to `/status`
- [rpc] Fix `prove` in ABCIQuery
- [cmd] MarshalJSONIndent on init

## 0.19.0 (April 13th, 2018)

BREAKING CHANGES:

- [cmd] improved `testnet` command; now it can fill in `persistent_peers` for you in the config file and much more (see `tendermint testnet --help` for details)
- [cmd] `show_node_id` now returns an error if there is no node key
- [rpc] changed the output format for the `/status` endpoint (see https://godoc.org/github.com/tendermint/tendermint/rpc/core#Status)

Upgrade from go-wire to go-amino. This is a sweeping change that breaks everything that is
serialized to disk or over the network.

See github.com/tendermint/go-amino for details on the new format.

See `scripts/wire2amino.go` for a tool to upgrade
genesis/priv_validator/node_key JSON files.

FEATURES:

- [test] docker-compose for local testnet setup (thanks Greg!)
## 0.18.0 (April 6th, 2018)

BREAKING CHANGES:

- [types] Merkle tree uses different encoding for varints (see tmlibs v0.8.0)
- [types] ValidatorSet.GetByAddress returns -1 if no validator found
- [p2p] require all addresses come with an ID no matter what
- [rpc] Listening address must contain a `tcp://` or `unix://` prefix

FEATURES:

- [rpc] StartHTTPAndTLSServer (not used yet)
- [rpc] Include validator's voting power in `/status`
- [rpc] `/tx` and `/tx_search` responses now include the transaction hash
- [rpc] Include peer NodeIDs in `/net_info`

IMPROVEMENTS:

- [config] trim whitespace from elements of lists (like `persistent_peers`)
- [rpc] `/tx_search` results are sorted by height
- [p2p] do not try to connect to ourselves (ok, maybe only once)
- [p2p] seeds respond with a bias towards good peers

BUG FIXES:

- [rpc] fix subscribing using an abci.ResponseDeliverTx tag
- [rpc] fix tx_indexers matchRange
- [rpc] fix unsubscribing (see tmlibs v0.8.0)

## 0.17.1 (March 27th, 2018)

BUG FIXES:

- [types] Actually support `app_state` in genesis as `AppStateJSON`

## 0.17.0 (March 27th, 2018)

BREAKING CHANGES:

- [types] WriteSignBytes -> SignBytes

IMPROVEMENTS:

- [all] renamed `dummy` (`persistent_dummy`) to `kvstore` (`persistent_kvstore`) (the name "dummy" is deprecated and will not work in the next breaking release)
- [docs] note on determinism (docs/determinism.rst)
- [genesis] `app_options` field is deprecated. Please rename it to `app_state` in your genesis file(s); `app_options` will not work in the next breaking release
- [p2p] dial seeds directly without potential peers
- [p2p] exponential backoff for addrs in the address book
- [p2p] mark peer as good if it contributed enough votes or block parts
- [p2p] stop peer if it sends incorrect data, a msg to an unknown channel, or a msg we did not expect
- [p2p] when `auth_enc` is true, all dialed peers must have a node ID in their address
- [spec] various improvements
- switched from glide to dep internally for package management
- [wire] prep work for upgrading to new go-wire (which is now called go-amino)

FEATURES:

- [config] exposed `auth_enc` flag to enable/disable encryption
- [config] added the `--p2p.private_peer_ids` flag and `PrivatePeerIDs` config variable (see config for description)
- [rpc] added `/health` endpoint, which returns empty result for now
- [types/priv_validator] new format and socket client, allowing for remote signing

BUG FIXES:

- [consensus] fix liveness bug by introducing ValidBlock mechanism
## 0.16.0 (February 20th, 2018)

BREAKING CHANGES:

- [config] use $TMHOME/config for all config and json files
- [p2p] old `--p2p.seeds` is now `--p2p.persistent_peers` (persistent peers to which TM will always connect)
- [p2p] now `--p2p.seeds` is only used for getting addresses (if addrbook is empty; not persistent)
- [p2p] NodeInfo: remove RemoteAddr and add Channels
  - we must have at least one overlapping channel with peer
  - we only send msgs for channels the peer advertised
- [p2p/conn] pong timeout
- [lite] comment out IAVL related code

FEATURES:

- [p2p] added new `/dial_peers?persistent=_` **unsafe** endpoint
- [p2p] persistent node key in `$TMHOME/config/node_key.json`
- [p2p] introduce peer ID and authenticate peers by ID using addresses like `ID@IP:PORT` (see the sketch after this list)
- [p2p/pex] new seed mode crawls the network and serves as a seed.
- [config] MempoolConfig.CacheSize
- [config] P2P.SeedMode (`--p2p.seed_mode`)
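A minimal sketch of splitting the new `ID@IP:PORT` address form; this is an illustration only, the real parsing lives in the `p2p` package:

```
package main

import (
	"fmt"
	"strings"
)

// splitNetAddress separates the node ID from the host:port part of an
// address in the ID@IP:PORT form introduced in this release.
func splitNetAddress(addr string) (id, hostPort string, err error) {
	parts := strings.SplitN(addr, "@", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("address %q is missing the ID part", addr)
	}
	return parts[0], parts[1], nil
}

func main() {
	id, hp, err := splitNetAddress("deadbeefdeadbeef@1.2.3.4:26656")
	if err != nil {
		panic(err)
	}
	fmt.Println(id, hp) // deadbeefdeadbeef 1.2.3.4:26656
}
```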
IMPROVEMENTS:

- [p2p/pex] stricter rules in the PEX reactor for better handling of abuse
- [p2p] various improvements to code structure including subpackages for `pex` and `conn`
- [docs] new spec!
- [all] speed up the tests!

BUG FIXES:

- [blockchain] StopPeerForError on timeout
- [consensus] StopPeerForError on a bad Maj23 message
- [state] flush mempool conn before calling commit
- [types] fix priv val signing things that only differ by timestamp
- [mempool] fix memory leak causing zombie peers
- [p2p/conn] fix potential deadlock

## 0.15.0 (December 29, 2017)

BREAKING CHANGES:

- [p2p] enable the Peer Exchange reactor by default
- [types] add Timestamp field to Proposal/Vote
- [types] add new fields to Header: TotalTxs, ConsensusParamsHash, LastResultsHash, EvidenceHash
- [types] add Evidence to Block
- [types] simplify ValidateBasic
- [state] updates to support changes to the header
- [state] Enforce <1/3 of validator set can change at a time

FEATURES:

- [state] Send indices of absent validators and addresses of byzantine validators in BeginBlock
- [state] Historical ConsensusParams and ABCIResponses
- [docs] Specification for the base Tendermint data structures.
- [evidence] New evidence reactor for gossiping and managing evidence
- [rpc] `/block_results?height=X` returns the DeliverTx results for a given height.

IMPROVEMENTS:

- [consensus] Better handling of corrupt WAL file

BUG FIXES:

- [lite] fix race
- [state] validate block.Header.ValidatorsHash
- [p2p] allow seed addresses to be prefixed with e.g. `tcp://`
- [p2p] use consistent key to refer to peers so we don't try to connect to existing peers
- [cmd] fix `tendermint init` to ignore files that are there and generate files that aren't.
## 0.14.0 (December 11, 2017)

BREAKING CHANGES:

- consensus/wal: removed separator
- rpc/client: changed Subscribe/Unsubscribe/UnsubscribeAll funcs signatures to be identical to event bus.

FEATURES:

- new `tendermint lite` command (and `lite/proxy` pkg) for running a light-client RPC proxy.
  NOTE it is currently insecure and its APIs are not yet covered by semver

IMPROVEMENTS:

- rpc/client: can act as event bus subscriber (see https://github.com/tendermint/tendermint/issues/945).
- p2p: use exponential backoff from seconds to hours when attempting to reconnect to persistent peer
- config: moniker defaults to the machine's hostname instead of "anonymous"

BUG FIXES:

- p2p: no longer exit if one of the seed addresses is incorrect

## 0.13.0 (December 6, 2017)

BREAKING CHANGES:

- abci: update to v0.8 using gogo/protobuf; includes tx tags, vote info in RequestBeginBlock, data.Bytes everywhere, use int64, etc.
- types: block heights are now `int64` everywhere
- types & node: EventSwitch and EventCache have been replaced by EventBus and EventBuffer; event types have been overhauled
- node: EventSwitch methods now refer to EventBus
- rpc/lib/types: RPCResponse is no longer a pointer; WSRPCConnection interface has been modified
- rpc/client: WaitForOneEvent takes an EventsClient instead of types.EventSwitch
- rpc/client: Add/RemoveListenerForEvent are now Subscribe/Unsubscribe
- rpc/core/types: ResultABCIQuery wraps an abci.ResponseQuery
- rpc: `/subscribe` and `/unsubscribe` take `query` arg instead of `event`
- rpc: `/status` returns the LatestBlockTime in human-readable form instead of in nanoseconds
- mempool: cached transactions return an error instead of an ABCI response with BadNonce

FEATURES:

- rpc: new `/unsubscribe_all` WebSocket RPC endpoint
- rpc: new `/tx_search` endpoint for filtering transactions by more complex queries
- p2p/trust: new trust metric for tracking peers. See ADR-006
- config: TxIndexConfig allows setting which DeliverTx tags to index

IMPROVEMENTS:

- New asynchronous events system using `tmlibs/pubsub`
- logging: Various small improvements
- consensus: Graceful shutdown when app crashes
- tests: Fix various non-deterministic errors
- p2p: more defensive programming

BUG FIXES:

- consensus: fix panic where prs.ProposalBlockParts is not initialized
- p2p: fix panic on bad channel
## 0.12.1 (November 27, 2017)

BUG FIXES:

- upgrade tmlibs dependency to enable Windows builds for Tendermint

## 0.12.0 (October 27, 2017)

BREAKING CHANGES:

- rpc/client: websocket ResultsCh and ErrorsCh unified in ResponsesCh.
- rpc/client: ABCIQuery no longer takes `prove`
- state: remove GenesisDoc from state.
- consensus: new binary WAL format provides efficiency and uses checksums to detect corruption
  - use scripts/wal2json to convert to json for debugging

FEATURES:

- new `certifiers` pkg contains the tendermint light-client library (name subject to change)!
- rpc: `/genesis` includes the `app_options`.
- rpc: `/abci_query` takes an additional `height` parameter to support historical queries.
- rpc/client: new ABCIQueryWithOptions supports options like `trusted` (set false to get a proof) and `height` to query a historical height.

IMPROVEMENTS:

- rpc/lib/client: add jitter to reconnects.
- rpc/lib/types: `RPCError` satisfies the `error` interface.

BUG FIXES:

- rpc/client: fix ws deadlock after stopping
- blockchain: fix panic on AddBlock when peer is nil
- mempool: fix sending on TxsAvailable when a tx has been invalidated
- consensus: don't run WAL catchup if we fast synced

## 0.11.1 (October 10, 2017)

IMPROVEMENTS:

- blockchain/reactor: respondWithNoResponseMessage for missing height

BUG FIXES:

- rpc: fixed client WebSocket timeout
- rpc: client now resubscribes on reconnection
- rpc: fix panics on missing params
- rpc: fix `/dump_consensus_state` to have normal json output (NOTE: technically breaking, but worth a bug fix label)
- types: fixed out of range error in VoteSet.addVote
- consensus: fix wal autofile via https://github.com/tendermint/tmlibs/blob/master/CHANGELOG.md#032-october-2-2017
## 0.11.0 (September 22, 2017)

BREAKING CHANGES:

- genesis file: validator `amount` is now `power`
- abci: Info, BeginBlock, InitChain all take structs
- rpc: various changes to match the JSONRPC spec (http://www.jsonrpc.org/specification), including breaking ones:
  - requests that previously returned HTTP code 4XX now return 200 with an error code in the JSONRPC response.
  - `rpctypes.RPCResponse` uses new `RPCError` type instead of `string` (see the sketch after this list).
- cmd: if there is no genesis, exit immediately instead of waiting around for one to show.
- types: `Signer.Sign` returns an error.
- state: every validator set change is persisted to disk, which required some changes to the `State` structure.
- p2p: new `p2p.Peer` interface used for all reactor methods (instead of `*p2p.Peer` struct).
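A hedged sketch of the new response shape; the field layout follows the JSONRPC 2.0 spec, not necessarily the exact `rpctypes` definitions:

```
package main

import (
	"encoding/json"
	"fmt"
)

// RPCError replaces the old plain-string error field.
type RPCError struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
	Data    string `json:"data,omitempty"`
}

// RPCResponse carries either a result or a structured error.
type RPCResponse struct {
	JSONRPC string          `json:"jsonrpc"` // always "2.0"
	ID      string          `json:"id"`
	Result  json.RawMessage `json:"result,omitempty"`
	Error   *RPCError       `json:"error,omitempty"`
}

func main() {
	// Errors now ride inside an HTTP 200 response rather than a 4XX status.
	resp := RPCResponse{
		JSONRPC: "2.0",
		ID:      "0",
		Error:   &RPCError{Code: -32600, Message: "Invalid Request"},
	}
	out, _ := json.Marshal(resp)
	fmt.Println(string(out))
}
```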
FEATURES:

- rpc: `/validators?height=X` allows querying of validators at previous heights.
- rpc: Leaving the `height` param empty for `/block`, `/validators`, and `/commit` will return the value for the latest height.

IMPROVEMENTS:

- docs: Moved all docs from the website and tools repo in, converted to `.rst`, and cleaned up for presentation on `tendermint.readthedocs.io`

BUG FIXES:

- fix WAL opening issue on Windows

## 0.10.4 (September 5, 2017)

IMPROVEMENTS:

- docs: Added Slate docs to each rpc function (see rpc/core)
- docs: Ported all website docs to Read The Docs
- config: expose some p2p params to tweak performance: RecvRate, SendRate, and MaxMsgPacketPayloadSize
- rpc: Upgrade the websocket client and server, including improved auto reconnect, and proper ping/pong

BUG FIXES:

- consensus: fix panic on getVoteBitArray
- consensus: hang instead of panicking on byzantine consensus failures
- cmd: don't load config for version command

## 0.10.3 (August 10, 2017)

FEATURES:

- control over empty block production:
  - new flag, `--consensus.create_empty_blocks`; when set to false, blocks are only created when there are txs or when the AppHash changes.
  - new config option, `consensus.create_empty_blocks_interval`; an empty block is created after this many seconds.
  - in normal operation, `create_empty_blocks = true` and `create_empty_blocks_interval = 0`, so blocks are being created all the time (as in all previous versions of tendermint). The number of empty blocks can be reduced by increasing `create_empty_blocks_interval` or by setting `create_empty_blocks = false`.
- new `TxsAvailable()` method added to Mempool that returns a channel which fires when txs are available (see the sketch after this list).
- new heartbeat message added to consensus reactor to notify peers that a node is waiting for txs before entering propose step.
- rpc: Add `syncing` field to response returned by `/status`. Is `true` while in fast-sync mode.
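A minimal sketch of the intended usage pattern for `TxsAvailable()` together with `create_empty_blocks_interval`; the channel here is a mock, not the real Mempool API:

```
package main

import (
	"fmt"
	"time"
)

// txsAvailable mimics the channel returned by Mempool.TxsAvailable():
// it fires once when the mempool goes from empty to non-empty.
func txsAvailable() <-chan struct{} {
	ch := make(chan struct{}, 1)
	go func() {
		time.Sleep(50 * time.Millisecond) // pretend a tx arrives
		ch <- struct{}{}
	}()
	return ch
}

func main() {
	// Stand-in for consensus.create_empty_blocks_interval.
	emptyBlockInterval := 200 * time.Millisecond

	select {
	case <-txsAvailable():
		fmt.Println("txs available: entering propose step")
	case <-time.After(emptyBlockInterval):
		fmt.Println("interval elapsed: proposing an empty block")
	}
}
```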
IMPROVEMENTS:

- various improvements to documentation and code comments

BUG FIXES:

- mempool: pass height into constructor so it doesn't always start at 0

## 0.10.2 (July 10, 2017)

FEATURES:

- Enable lower latency block commits by adding consensus reactor sleep durations and p2p flush throttle timeout to the config

IMPROVEMENTS:

- More detailed logging in the consensus reactor and state machine
- More in-code documentation for many exposed functions, especially in consensus/reactor.go and p2p/switch.go
- Improved readability for some function definitions and code blocks with long lines

## 0.10.1 (June 28, 2017)

FEATURES:

- Use `--trace` to get stack traces for logged errors
- types: GenesisDoc.ValidatorHash returns the hash of the genesis validator set
- types: GenesisDocFromFile parses a GenesisDoc from a JSON file

IMPROVEMENTS:

- Add a Code of Conduct
- Variety of improvements as suggested by the `megacheck` tool
- rpc: deduplicate tests between rpc/client and rpc/tests
- rpc: addresses without a protocol prefix default to `tcp://`. `http://` is also accepted as an alias for `tcp://`
- cmd: commands are more easily reusable from other tools
- DOCKER: automate build/push

BUG FIXES:

- Fix log statements using keys with spaces (logger does not currently support spaces)
- rpc: set logger on websocket connection
- rpc: fix ws connection stability by setting write deadline on pings
## 0.10.0 (June 2, 2017)

Includes major updates to configuration, logging, and json serialization.
Also includes the Grand Repo-Merge of 2017.

BREAKING CHANGES:

- Config and Flags:
  - The `config` map is replaced with a [`Config` struct](https://github.com/tendermint/tendermint/blob/master/config/config.go#L11),
    containing substructs: `BaseConfig`, `P2PConfig`, `MempoolConfig`, `ConsensusConfig`, `RPCConfig`
    - This affects the following flags:
      - `--seeds` is now `--p2p.seeds`
      - `--node_laddr` is now `--p2p.laddr`
      - `--pex` is now `--p2p.pex`
      - `--skip_upnp` is now `--p2p.skip_upnp`
      - `--rpc_laddr` is now `--rpc.laddr`
      - `--grpc_laddr` is now `--rpc.grpc_laddr`
  - Any configuration option now within a substruct must come under that heading in the `config.toml`, for instance:

    ```
    [p2p]
    laddr="tcp://1.2.3.4:46656"

    [consensus]
    timeout_propose=1000
    ```

  - Use viper and `DefaultConfig() / TestConfig()` functions to handle defaults, and remove `config/tendermint` and `config/tendermint_test`
  - Change some [function and method signatures](https://gist.github.com/ebuchman/640d5fc6c2605f73497992fe107ebe0b) to accommodate the new config
- Logger:
  - Replace static `log15` logger with a simple interface, and provide a new implementation using `go-kit`.
    See our new [logging library](https://github.com/tendermint/tmlibs/log) and [blog post](https://tendermint.com/blog/abstracting-the-logger-interface-in-go) for more details
  - Levels `warn` and `notice` are removed (you may need to change them in your `config.toml`!)
  - Change some [function and method signatures](https://gist.github.com/ebuchman/640d5fc6c2605f73497992fe107ebe0b) to accept a logger
- JSON serialization:
  - Replace `[TypeByte, Xxx]` with `{"type": "some-type", "data": Xxx}` in RPC and all `.json` files by using `go-wire/data`. For instance, a public key is now:

    ```
    "pub_key": {
      "type": "ed25519",
      "data": "83DDF8775937A4A12A2704269E2729FCFCD491B933C4B0A7FFE37FE41D7760D0"
    }
    ```

  - Remove type information about RPC responses, so `[TypeByte, {"jsonrpc": "2.0", ... }]` is now just `{"jsonrpc": "2.0", ... }`
  - Change `[]byte` to `data.Bytes` in all serialized types (for hex encoding)
  - Lowercase the JSON tags in `ValidatorSet` fields
  - Introduce `EventDataInner` for serializing events
- Other:
  - Send InitChain message in handshake if `appBlockHeight == 0`
  - Do not include the `Accum` field when computing the validator hash. This makes the ValidatorSetHash unique for a given validator set, rather than changing with every block (as the Accum changes)
  - Unsafe RPC calls are not enabled by default. This includes `/dial_seeds`, and all calls prefixed with `unsafe`. Use the `--rpc.unsafe` flag to enable.

FEATURES:

- Per-module log levels. For instance, the new default is `state:info,*:error`, which means the `state` package logs at `info` level, and everything else logs at `error` level (see the sketch after this list)
- Log if a node is a validator or not in every consensus round
- Use ldflags to set the git hash as part of the version
- Ignore `address` and `pub_key` fields in `priv_validator.json` and overwrite them with the values derived from the `priv_key`
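A minimal sketch of how a per-module level string such as `state:info,*:error` can be parsed; the real parser lives in the tmlibs log package, so this is an illustration only:

```
package main

import (
	"fmt"
	"strings"
)

// parseLogLevels maps module names to levels; "*" carries the default level.
func parseLogLevels(spec string) map[string]string {
	levels := map[string]string{}
	for _, pair := range strings.Split(spec, ",") {
		kv := strings.SplitN(pair, ":", 2)
		if len(kv) == 2 {
			levels[kv[0]] = kv[1]
		}
	}
	return levels
}

func main() {
	fmt.Println(parseLogLevels("state:info,*:error")) // map[*:error state:info]
}
```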
IMPROVEMENTS:

- Merge `tendermint/go-p2p -> tendermint/tendermint/p2p` and `tendermint/go-rpc -> tendermint/tendermint/rpc/lib`
- Update paths for grand repo merge:
  - `go-common -> tmlibs/common`
  - `go-data -> go-wire/data`
  - All other `go-` libs, except `go-crypto` and `go-wire`, are merged under `tmlibs`
- No global loggers (loggers are passed into constructors, or preferably set with a SetLogger method)
- Return HTTP status codes with errors for RPC responses
- Limit `/blockchain_info` call to return a maximum of 20 blocks
- Use `.Wrap()` and `.Unwrap()` instead of e.g. `PubKeyS` for `go-crypto` types
- RPC JSON responses use pretty printing (via `json.MarshalIndent`)
- Color code different instances of the consensus for tests
- Isolate viper to `cmd/tendermint/commands` and do not read config from file for tests

## 0.9.2 (April 26, 2017)

BUG FIXES:

- Fix bug in `ResetPrivValidator` where we were using the global config and log (causing external consumers, e.g. basecoin, to fail).

## 0.9.1 (April 21, 2017)

FEATURES:

- Transaction indexing - txs are indexed by their hash using a simple key-value store; easily extended to more advanced indexers
- New `/tx?hash=X` endpoint to query for transactions and their DeliverTx result by hash. Optionally returns a proof of the tx's inclusion in the block
- `tendermint testnet` command initializes files for a testnet

IMPROVEMENTS:

- CLI now uses the Cobra framework
- TMROOT is now TMHOME (TMROOT will stop working in 0.10.0)
- `/broadcast_tx_XXX` also returns the Hash (can be used to query for the tx)
- `/broadcast_tx_commit` also returns the height the block was committed in
- ABCIResponses struct persisted to disk before calling Commit; makes handshake replay much cleaner
- WAL uses #ENDHEIGHT instead of #HEIGHT (#HEIGHT will stop working in 0.10.0)
- Peers included via `--seeds`, under `seeds` in the config, or in `/dial_seeds` are now persistent, and will be reconnected to if the connection breaks

BUG FIXES:

- Fix bug in fast-sync where we stop syncing after a peer is removed, even if they're re-added later
- Fix handshake replay to handle validator set changes and results of DeliverTx when we crash after app.Commit but before state.Save()
## 0.9.0 (March 6, 2017)

BREAKING CHANGES:

- Update ABCI to v0.4.0, where Query is now `Query(RequestQuery) ResponseQuery`, enabling precise proofs at particular heights:

```
message RequestQuery{
  bytes data = 1;
  string path = 2;
  uint64 height = 3;
  bool prove = 4;
}

message ResponseQuery{
  CodeType code = 1;
  int64 index = 2;
  bytes key = 3;
  bytes value = 4;
  bytes proof = 5;
  uint64 height = 6;
  string log = 7;
}
```

- `BlockMeta` data type unifies its Hash and PartSetHash under a `BlockID`:

```
type BlockMeta struct {
    BlockID BlockID `json:"block_id"` // the block hash and partsethash
    Header  *Header `json:"header"`   // the block's Header
}
```

- `ValidatorSet.Proposer` is exposed as a field and persisted with the `State`. Use `GetProposer()` to initialize or update after validator-set changes.
- `tendermint gen_validator` command output is now pure JSON

FEATURES:

- New RPC endpoint `/commit?height=X` returns header and commit for block at height `X`
- Client API for each endpoint, including mocks for testing

IMPROVEMENTS:

- `Node` is now a `BaseService`
- Simplified starting Tendermint in-process from another application
- Better organized Makefile
- Scripts for auto-building binaries across platforms
- Docker image improved, slimmed down (using Alpine), and changed from tendermint/tmbase to tendermint/tendermint
- New repo files: `CONTRIBUTING.md`, Github `ISSUE_TEMPLATE`, `CHANGELOG.md`
- Improvements on CircleCI for managing build/test artifacts
- Handshake replay is done through the consensus package, possibly using a mockApp
- Graceful shutdown of RPC listeners
- Tests for the PEX reactor and DialSeeds

BUG FIXES:

- Check peer.Send for failure before updating PeerState in consensus
- Fix panic in `/dial_seeds` with invalid addresses
- Fix proposer selection logic in ValidatorSet by taking the address into account in the `accumComparable`
- Fix inconsistencies with `ValidatorSet.Proposer` across restarts by persisting it in the `State`
## 0.8.0 (January 13, 2017)

BREAKING CHANGES:

- New data type `BlockID` to represent blocks:

```
type BlockID struct {
    Hash        []byte        `json:"hash"`
    PartsHeader PartSetHeader `json:"parts"`
}
```

- `Vote` data type now includes validator address and index:

```
type Vote struct {
    ValidatorAddress []byte           `json:"validator_address"`
    ValidatorIndex   int              `json:"validator_index"`
    Height           int              `json:"height"`
    Round            int              `json:"round"`
    Type             byte             `json:"type"`
    BlockID          BlockID          `json:"block_id"` // zero if vote is nil.
    Signature        crypto.Signature `json:"signature"`
}
```

- Update TMSP to v0.3.0, where it is now called ABCI and AppendTx is DeliverTx
- Hex strings in the RPC are now "0x" prefixed

FEATURES:

- New message type on the ConsensusReactor, `Maj23Msg`, for peers to alert others they've seen a Maj23,
  in order to track and handle conflicting votes intelligently to prevent Byzantine faults from causing halts:

```
type VoteSetMaj23Message struct {
    Height  int
    Round   int
    Type    byte
    BlockID types.BlockID
}
```

- Configurable block part set size
- Validator set changes
- Optionally skip TimeoutCommit if we have all the votes
- Handshake between Tendermint and App on startup to sync latest state and ensure consistent recovery from crashes
- GRPC server for BroadcastTx endpoint

IMPROVEMENTS:

- Less verbose logging
- Better test coverage (37% -> 49%)
- Canonical SignBytes for signable types
- Write-Ahead Log for Mempool and Consensus via tmlibs/autofile
- Better in-process testing for the consensus reactor and byzantine faults
- Better crash/restart testing for individual nodes at preset failure points, and of networks at arbitrary points
- Better abstraction over timeout mechanics

BUG FIXES:

- Fix memory leak in mempool peer
- Fix panic on POLRound=-1
- Actually set the CommitTime
- Actually send BeginBlock message
- Fix a liveness issue caused by Byzantine proposals/votes. Uses the new `Maj23Msg`.
## 0.7.4 (December 14, 2016)

FEATURES:

- Enable the Peer Exchange reactor with the `--pex` flag for more resilient gossip network (feature still in development, beware dragons)

IMPROVEMENTS:

- Remove restrictions on RPC endpoint `/dial_seeds` to enable manual network configuration

## 0.7.3 (October 20, 2016)

IMPROVEMENTS:

- Type safe FireEvent
- More WAL/replay tests
- Cleanup some docs

BUG FIXES:

- Fix deadlock in mempool for synchronous apps
- Replay handles non-empty blocks
- Fix race condition in HeightVoteSet

## 0.7.2 (September 11, 2016)

BUG FIXES:

- Set mustConnect=false so tendermint will retry connecting to the app

## 0.7.1 (September 10, 2016)

FEATURES:

- New TMSP connection for Query/Info
- New RPC endpoints:
  - `tmsp_query`
  - `tmsp_info`
- Allow application to filter peers through Query (off by default)

IMPROVEMENTS:

- TMSP connection type enforced at compile time
- All listen/client urls use a `tcp://` or `unix://` prefix

BUG FIXES:

- Save LastSignature/LastSignBytes to `priv_validator.json` for recovery
- Fix event unsubscribe
- Fix fastsync/blockchain reactor
## 0.7.0 (August 7, 2016)

BREAKING CHANGES:

- Strict SemVer starting now!
- Update to ABCI v0.2.0
- Validation types now called Commit
- NewBlock event only returns the block header

FEATURES:

- TMSP and RPC support TCP and UNIX sockets
- Additional config options including block size and consensus parameters
- New WAL mode `cswal_light`; logs only the validator's own votes
- New RPC endpoints:
  - for starting/stopping profilers, and for updating config
  - `/broadcast_tx_commit`, returns when tx is included in a block, else an error
  - `/unsafe_flush_mempool`, empties the mempool

IMPROVEMENTS:

- Various optimizations
- Remove bad or invalidated transactions from the mempool cache (allows later duplicates)
- More elaborate testing using CircleCI, including benchmarking throughput on 4 DigitalOcean droplets

BUG FIXES:

- Various fixes to WAL and replay logic
- Various race conditions

## PreHistory

Strict versioning only began with the release of v0.7.0, in late summer 2016.
The project itself began in early summer 2014 and was workable decentralized cryptocurrency software by the end of that year.
Through the course of 2015, in collaboration with Eris Industries (now Monax Industries),
many additional features were integrated, including an implementation from scratch of the Ethereum Virtual Machine.
That implementation now forms the heart of [Burrow](https://github.com/hyperledger/burrow).
In the latter half of 2015, the consensus algorithm was upgraded with a more asynchronous design and a more deterministic and robust implementation.

By late 2015, frustration with the difficulty of forking a large monolithic stack to create alternative cryptocurrency designs led to the
invention of the Application Blockchain Interface (ABCI), then called the Tendermint Socket Protocol (TMSP).
The Ethereum Virtual Machine and various other transaction features were removed, and Tendermint was whittled down to a core consensus engine
driving an application running in another process.
The ABCI interface and implementation were iterated on and improved over the course of 2016,
until versioned history kicked in with v0.7.0.
@@ -1,56 +0,0 @@
# The Tendermint Code of Conduct

This code of conduct applies to all projects run by the Tendermint/COSMOS team and hence to Tendermint.

----

# Conduct

## Contact: adrian@tendermint.com

* We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.
* On Slack, please avoid using overtly sexual nicknames or other nicknames that might detract from a friendly, safe and welcoming environment for all.
* Please be kind and courteous. There’s no need to be mean or rude.
* Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.
* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term “harassment” as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don’t tolerate behavior that excludes people in socially marginalized groups.
* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the channel admins or the person mentioned above immediately. Whether you’re a regular contributor or a newcomer, we care about making this community a safe place for you and we’ve got your back.
* Likewise any spamming, trolling, flaming, baiting or other attention-stealing behaviour is not welcome.

----

# Moderation

These are the policies for upholding our community’s standards of conduct. If you feel that a thread needs moderation, please contact the above-mentioned person.

1. Remarks that violate the Tendermint/COSMOS standards of conduct, including hateful, hurtful, oppressive, or exclusionary remarks, are not allowed. (Cursing is allowed, but never targeting another user, and never in a hateful manner.)
2. Remarks that moderators find inappropriate, whether listed in the code of conduct or not, are also not allowed.
3. Moderators will first respond to such remarks with a warning.
4. If the warning is unheeded, the user will be “kicked,” i.e., kicked out of the communication channel to cool off.
5. If the user comes back and continues to make trouble, they will be banned, i.e., indefinitely excluded.
6. Moderators may choose at their discretion to un-ban the user if it was a first offense and they offer the offended party a genuine apology.
7. If a moderator bans someone and you think it was unjustified, please take it up with that moderator, or with a different moderator, in private. Complaints about bans in-channel are not allowed.
8. Moderators are held to a higher standard than other community members. If a moderator creates an inappropriate situation, they should expect less leeway than others.

In the Tendermint/COSMOS community we strive to go the extra step to look out for each other. Don’t just aim to be technically unimpeachable, try to be your best self. In particular, avoid flirting with offensive or sensitive issues, particularly if they’re off-topic; this all too often leads to unnecessary fights, hurt feelings, and damaged trust; worse, it can drive people away from the community entirely.

And if someone takes issue with something you said or did, resist the urge to be defensive. Just stop doing what it was they complained about and apologize. Even if you feel you were misinterpreted or unfairly accused, chances are good there was something you could’ve communicated better; remember that it’s your responsibility to make your fellow Cosmonauts comfortable. Everyone wants to get along and we are all here first and foremost because we want to talk about cool technology. You will find that people will be eager to assume good intent and forgive as long as you earn their trust.

The enforcement policies listed above apply to all official Tendermint/COSMOS venues. For other projects adopting the Tendermint/COSMOS Code of Conduct, please contact the maintainers of those projects for enforcement. If you wish to use this code of conduct for your own project, consider explicitly mentioning your moderation policy or making a copy with your own moderation policy so as to avoid confusion.

*Adapted from the [Node.js Policy on Trolling](http://blog.izs.me/post/30036893703/policy-on-trolling), the [Contributor Covenant v1.3.0](http://contributor-covenant.org/version/1/3/0/) and the [Rust Code of Conduct](https://www.rust-lang.org/en-US/conduct.html).*
@@ -1,117 +0,0 @@
# Contributing

Thank you for considering making contributions to Tendermint and related repositories! Start by taking a look at the [coding repo](https://github.com/tendermint/coding) for overall information on repository workflow and standards.

Please follow standard GitHub best practices: fork the repo, branch from the tip of develop, make some commits, and submit a pull request to develop. See the [open issues](https://github.com/tendermint/tendermint/issues) for things we need help with!

Please make sure to use `gofmt` before every commit - the easiest way to do this is to have your editor run it for you upon saving a file.

## Forking

Please note that Go requires code to live under absolute paths, which complicates forking.
While my fork lives at `https://github.com/ebuchman/tendermint`,
the code should never exist at `$GOPATH/src/github.com/ebuchman/tendermint`.
Instead, we use `git remote` to add the fork as a new remote for the original repo,
`$GOPATH/src/github.com/tendermint/tendermint`, and do all the work there.

For instance, to create a fork and work on a branch of it, I would:

* Create the fork on GitHub, using the fork button.
* Go to the original repo checked out locally (i.e. `$GOPATH/src/github.com/tendermint/tendermint`)
* `git remote rename origin upstream`
* `git remote add origin git@github.com:ebuchman/tendermint.git`

Now `origin` refers to my fork and `upstream` refers to the tendermint version.
So I can `git push -u origin master` to update my fork, and make pull requests to tendermint from there.
Of course, replace `ebuchman` with your git handle.

To pull in updates from the upstream repo, run

* `git fetch upstream`
* `git rebase upstream/master` (or whatever branch you want)

Please don't make Pull Requests to `master`.
## Dependencies

We use [dep](https://github.com/golang/dep) to manage dependencies.

That said, the master branch of every Tendermint repository should just build
with `go get`, which means they should be kept up-to-date with their
dependencies so we can get away with telling people they can just `go get` our
software.

Since some dependencies are not under our control, a third party may break our
build, in which case we can fall back on `dep ensure` (or `make
get_vendor_deps`). Even for dependencies under our control, dep helps us to
keep multiple repos in sync as they evolve. Anything with an executable, such
as apps, tools, and the core, should use dep.

Run `dep status` to get a list of vendored dependencies that may not be
up-to-date.

## Vagrant

If you are a [Vagrant](https://www.vagrantup.com/) user, you can get started
hacking Tendermint with the commands below.

NOTE: In case you installed Vagrant in 2017, you might need to run
`vagrant box update` to upgrade to the latest `ubuntu/xenial64`.

```
vagrant up
vagrant ssh
make test
```
## Testing

All repos should be hooked up to [CircleCI](https://circleci.com/).

If they have `.go` files in the root directory, they will be automatically
tested by circle using `go test -v -race ./...`. If not, they will need a
`circle.yml`. Ideally, every repo has a `Makefile` that defines `make test` and
includes its continuous integration status using a badge in the `README.md`.

## Branching Model and Release

User-facing repos should adhere to the branching model: http://nvie.com/posts/a-successful-git-branching-model/.
That is, these repos should be well versioned, and any merge to master requires a version bump and tagged release.

Libraries need not follow the model strictly, but would be wise to,
especially `go-p2p` and `go-rpc`, as their versions are referenced in tendermint core.

### Development Procedure:

- the latest state of development is on `develop`
- `develop` must never fail `make test`
- no `--force` pushes onto `develop` (except when reverting a broken commit, which should seldom happen)
- create a development branch either on github.com/tendermint/tendermint, or your fork (using `git remote add origin`)
- before submitting a pull request, rebase on top of `develop`

### Pull Merge Procedure:

- ensure pull branch is rebased on `develop`
- run `make test` to ensure that all tests pass
- merge pull request
- the `unstable` branch may be used to aggregate pull merges before testing them once together
- the push master may request that pull requests be rebased on top of `unstable`

### Release Procedure:

- start on `develop`
- run integration tests (see `test_integrations` in Makefile)
- prepare changelog/release issue
- bump versions
- push to `release-vX.X.X` to run the extended integration tests on the CI
- merge to master
- merge master back to develop

### Hotfix Procedure:

- start on `master`
- checkout a new branch named `hotfix-vX.X.X`
- make the required changes
  - these changes should be small and an absolute necessity
  - add a note to CHANGELOG.md
- bump versions
- push to `hotfix-vX.X.X` to run the extended integration tests on the CI
- merge hotfix-vX.X.X to master
- merge hotfix-vX.X.X to develop
- delete the hotfix-vX.X.X branch
@@ -1 +0,0 @@
tendermint
@@ -1,39 +0,0 @@
FROM alpine:3.7
MAINTAINER Greg Szabo <greg@tendermint.com>

# Tendermint will be looking for the genesis file in /tendermint/config/genesis.json
# (unless you change `genesis_file` in config.toml). You can put your config.toml and
# private validator file into /tendermint/config.
#
# The /tendermint/data dir is used by tendermint to store state.
ENV TMHOME /tendermint

# OS environment setup
# Set user right away for determinism, create directory for persistence and give our user ownership.
# jq and curl are used for extracting `pub_key` from the private validator while
# deploying tendermint with Kubernetes. It is nice to have bash so users
# can execute bash commands.
RUN apk update && \
    apk upgrade && \
    apk --no-cache add curl jq bash && \
    addgroup tmuser && \
    adduser -S -G tmuser tmuser -h "$TMHOME"

# Run the container with tmuser by default. (UID=100, GID=1000)
USER tmuser

# Expose the data directory as a volume since there's mutable state in there
VOLUME [ "$TMHOME" ]

WORKDIR $TMHOME

# p2p and rpc port
EXPOSE 26656 26657

ENTRYPOINT ["/usr/bin/tendermint"]
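# NOTE: backticks are not expanded in exec-form CMD, so the moniker below is
# passed as a literal string; override it at run time if you need the hostname
# (e.g. docker run ... node --moniker="$(hostname)").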
CMD ["node", "--moniker=`hostname`"] | |||
STOPSIGNAL SIGTERM | |||
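# The binary is copied in last so the layers above can be reused across rebuilds.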
ARG BINARY=tendermint
COPY $BINARY /usr/bin/tendermint
@@ -1,35 +0,0 @@
FROM alpine:3.7

ENV DATA_ROOT /tendermint
ENV TMHOME $DATA_ROOT

RUN addgroup tmuser && \
    adduser -S -G tmuser tmuser

RUN mkdir -p $DATA_ROOT && \
    chown -R tmuser:tmuser $DATA_ROOT

RUN apk add --no-cache bash curl jq

ENV GOPATH /go
ENV PATH "$PATH:/go/bin"

RUN mkdir -p /go/src/github.com/tendermint/tendermint && \
    apk add --no-cache go build-base git && \
    cd /go/src/github.com/tendermint/tendermint && \
    git clone https://github.com/tendermint/tendermint . && \
    git checkout develop && \
    make get_tools && \
    make get_vendor_deps && \
    make install && \
    cd - && \
    rm -rf /go/src/github.com/tendermint/tendermint && \
    apk del go build-base git

VOLUME $DATA_ROOT

EXPOSE 26656
EXPOSE 26657

ENTRYPOINT ["tendermint"]

CMD ["node", "--moniker=`hostname`", "--proxy_app=kvstore"]
@@ -1,18 +0,0 @@
FROM golang:1.10.1

# Grab deps (jq, hexdump, xxd, killall)
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    jq bsdmainutils vim-common psmisc netcat

# Add testing deps for curl
RUN echo 'deb http://httpredir.debian.org/debian testing main non-free contrib' >> /etc/apt/sources.list && \
    apt-get update && \
    apt-get install -y --no-install-recommends curl

VOLUME /go

EXPOSE 26656
EXPOSE 26657
@@ -1,16 +0,0 @@
build:
	@sh -c "'$(CURDIR)/build.sh'"

push:
	@sh -c "'$(CURDIR)/push.sh'"

build_develop:
	docker build -t "tendermint/tendermint:develop" -f Dockerfile.develop .

build_testing:
	docker build --tag tendermint/testing -f ./Dockerfile.testing .

push_develop:
	docker push "tendermint/tendermint:develop"

.PHONY: build build_develop build_testing push push_develop
@@ -1,67 +0,0 @@
# Docker

## Supported tags and respective `Dockerfile` links

- `0.17.1`, `latest` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/208ac32fa266657bd6c304e84ec828aa252bb0b8/DOCKER/Dockerfile)
- `0.15.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/170777300ea92dc21a8aec1abc16cb51812513a4/DOCKER/Dockerfile)
- `0.13.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/a28b3fff49dce2fb31f90abb2fc693834e0029c2/DOCKER/Dockerfile)
- `0.12.1` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/457c688346b565e90735431619ca3ca597ef9007/DOCKER/Dockerfile)
- `0.12.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/70d8afa6e952e24c573ece345560a5971bf2cc0e/DOCKER/Dockerfile)
- `0.11.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/9177cc1f64ca88a4a0243c5d1773d10fba67e201/DOCKER/Dockerfile)
- `0.10.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/e5342f4054ab784b2cd6150e14f01053d7c8deb2/DOCKER/Dockerfile)
- `0.9.1`, `0.9` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/809e0e8c5933604ba8b2d096803ada7c5ec4dfd3/DOCKER/Dockerfile)
- `0.9.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/d474baeeea6c22b289e7402449572f7c89ee21da/DOCKER/Dockerfile)
- `0.8.0`, `0.8` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/bf64dd21fdb193e54d8addaaaa2ecf7ac371de8c/DOCKER/Dockerfile)
- `develop` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/master/DOCKER/Dockerfile.develop)

The `develop` tag points to the [develop](https://github.com/tendermint/tendermint/tree/develop) branch.

## Quick reference

* **Where to get help:**
  https://cosmos.network/community

* **Where to file issues:**
  https://github.com/tendermint/tendermint/issues

* **Supported Docker versions:**
  [the latest release](https://github.com/moby/moby/releases) (down to 1.6 on a best-effort basis)

## Tendermint

Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine, written in any programming language, and securely replicates it on many machines.

For more background, see the [introduction](https://tendermint.readthedocs.io/en/master/introduction.html).

To get started developing applications, see the [application developers guide](https://tendermint.readthedocs.io/en/master/getting-started.html).

## How to use this image

### Start one instance of the Tendermint core with the `kvstore` app

A quick example of a built-in app and Tendermint core in one container.

```
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint init
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint node --proxy_app=kvstore
```

## Local cluster

To run a 4-node network, see the `Makefile` in the root of [the repo](https://github.com/tendermint/tendermint/blob/master/Makefile) and run:

```
make build-linux
make build-docker-localnode
make localnet-start
```

Note that this will build and use a different image than the ones provided here.

## License

- Tendermint's license is [Apache 2.0](https://github.com/tendermint/tendermint/blob/master/LICENSE).

## Contributing

Contributions are most welcome! See the [contributing file](https://github.com/tendermint/tendermint/blob/master/CONTRIBUTING.md) for more information.
@@ -1,20 +0,0 @@
#!/usr/bin/env bash
set -e

# Get the tag from the version, or try to figure it out.
if [ -z "$TAG" ]; then
    TAG=$(awk -F\" '/Version =/ { print $2; exit }' < ../version/version.go)
fi
if [ -z "$TAG" ]; then
    echo "Please specify a tag."
    exit 1
fi
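# Derive the minor-version tag by stripping the patch component (e.g. 0.20.1 -> 0.20).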
TAG_NO_PATCH=${TAG%.*}

read -p "==> Build 3 docker images with the following tags (latest, $TAG, $TAG_NO_PATCH)? y/n" -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]
then
    docker build -t "tendermint/tendermint" -t "tendermint/tendermint:$TAG" -t "tendermint/tendermint:$TAG_NO_PATCH" .
fi
@@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -e

# Get the tag from the version, or try to figure it out.
if [ -z "$TAG" ]; then
    TAG=$(awk -F\" '/Version =/ { print $2; exit }' < ../version/version.go)
fi
if [ -z "$TAG" ]; then
    echo "Please specify a tag."
    exit 1
fi
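# Derive the minor-version tag by stripping the patch component (e.g. 0.20.1 -> 0.20).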
TAG_NO_PATCH=${TAG%.*}

read -p "==> Push 3 docker images with the following tags (latest, $TAG, $TAG_NO_PATCH)? y/n" -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]
then
    docker push "tendermint/tendermint:latest"
    docker push "tendermint/tendermint:$TAG"
    docker push "tendermint/tendermint:$TAG_NO_PATCH"
fi
@@ -1,431 +0,0 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.

[[projects]]
  branch = "master"
  name = "github.com/beorn7/perks"
  packages = ["quantile"]
  revision = "3a771d992973f24aa725d07868b467d1ddfceafb"

[[projects]]
  branch = "master"
  name = "github.com/btcsuite/btcd"
  packages = ["btcec"]
  revision = "86fed781132ac890ee03e906e4ecd5d6fa180c64"

[[projects]]
  name = "github.com/davecgh/go-spew"
  packages = ["spew"]
  revision = "346938d642f2ec3594ed81d874461961cd0faa76"
  version = "v1.1.0"

[[projects]]
  branch = "master"
  name = "github.com/ebuchman/fail-test"
  packages = ["."]
  revision = "95f809107225be108efcf10a3509e4ea6ceef3c4"

[[projects]]
  name = "github.com/fortytw2/leaktest"
  packages = ["."]
  revision = "a5ef70473c97b71626b9abeda80ee92ba2a7de9e"
  version = "v1.2.0"

[[projects]]
  name = "github.com/fsnotify/fsnotify"
  packages = ["."]
  revision = "c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9"
  version = "v1.4.7"

[[projects]]
  name = "github.com/go-kit/kit"
  packages = [
    "log",
    "log/level",
    "log/term",
    "metrics",
    "metrics/discard",
    "metrics/internal/lv",
    "metrics/prometheus"
  ]
  revision = "4dc7be5d2d12881735283bcab7352178e190fc71"
  version = "v0.6.0"

[[projects]]
  name = "github.com/go-logfmt/logfmt"
  packages = ["."]
  revision = "390ab7935ee28ec6b286364bba9b4dd6410cb3d5"
  version = "v0.3.0"

[[projects]]
  name = "github.com/go-stack/stack"
  packages = ["."]
  revision = "259ab82a6cad3992b4e21ff5cac294ccb06474bc"
  version = "v1.7.0"

[[projects]]
  name = "github.com/gogo/protobuf"
  packages = [
    "gogoproto",
    "jsonpb",
    "proto",
    "protoc-gen-gogo/descriptor",
    "sortkeys",
    "types"
  ]
  revision = "1adfc126b41513cc696b209667c8656ea7aac67c"
  version = "v1.0.0"

[[projects]]
  name = "github.com/golang/protobuf"
  packages = [
    "proto",
    "ptypes",
    "ptypes/any",
    "ptypes/duration",
    "ptypes/timestamp"
  ]
  revision = "925541529c1fa6821df4e44ce2723319eb2be768"
  version = "v1.0.0"

[[projects]]
  branch = "master"
  name = "github.com/golang/snappy"
  packages = ["."]
  revision = "2e65f85255dbc3072edf28d6b5b8efc472979f5a"

[[projects]]
  name = "github.com/gorilla/websocket"
  packages = ["."]
  revision = "ea4d1f681babbce9545c9c5f3d5194a789c89f5b"
  version = "v1.2.0"

[[projects]]
  branch = "master"
  name = "github.com/hashicorp/hcl"
  packages = [
    ".",
    "hcl/ast",
    "hcl/parser",
    "hcl/printer",
    "hcl/scanner",
    "hcl/strconv",
    "hcl/token",
    "json/parser",
    "json/scanner",
    "json/token"
  ]
  revision = "ef8a98b0bbce4a65b5aa4c368430a80ddc533168"

[[projects]]
  name = "github.com/inconshreveable/mousetrap"
  packages = ["."]
  revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
  version = "v1.0"

[[projects]]
  branch = "master"
  name = "github.com/jmhodges/levigo"
  packages = ["."]
  revision = "c42d9e0ca023e2198120196f842701bb4c55d7b9"

[[projects]]
  branch = "master"
  name = "github.com/kr/logfmt"
  packages = ["."]
  revision = "b84e30acd515aadc4b783ad4ff83aff3299bdfe0"

[[projects]]
  name = "github.com/magiconair/properties"
  packages = ["."]
  revision = "c2353362d570a7bfa228149c62842019201cfb71"
  version = "v1.8.0"

[[projects]]
  name = "github.com/matttproud/golang_protobuf_extensions"
  packages = ["pbutil"]
  revision = "c12348ce28de40eed0136aa2b644d0ee0650e56c"
  version = "v1.0.1"

[[projects]]
  branch = "master"
  name = "github.com/mitchellh/mapstructure"
  packages = ["."]
  revision = "bb74f1db0675b241733089d5a1faa5dd8b0ef57b"

[[projects]]
  name = "github.com/pelletier/go-toml"
  packages = ["."]
  revision = "c01d1270ff3e442a8a57cddc1c92dc1138598194"
  version = "v1.2.0"

[[projects]]
  name = "github.com/pkg/errors"
  packages = ["."]
  revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
  version = "v0.8.0"

[[projects]]
  name = "github.com/pmezard/go-difflib"
  packages = ["difflib"]
  revision = "792786c7400a136282c1664665ae0a8db921c6c2"
  version = "v1.0.0"

[[projects]]
  name = "github.com/prometheus/client_golang"
  packages = [
    "prometheus",
    "prometheus/promhttp"
  ]
  revision = "c5b7fccd204277076155f10851dad72b76a49317"
  version = "v0.8.0"

[[projects]]
  branch = "master"
  name = "github.com/prometheus/client_model"
  packages = ["go"]
  revision = "99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c"

[[projects]]
  branch = "master"
  name = "github.com/prometheus/common"
  packages = [
    "expfmt",
    "internal/bitbucket.org/ww/goautoneg",
    "model"
  ]
  revision = "7600349dcfe1abd18d72d3a1770870d9800a7801"

[[projects]]
  branch = "master"
  name = "github.com/prometheus/procfs"
  packages = [
    ".",
    "internal/util",
    "nfs",
    "xfs"
  ]
  revision = "94663424ae5ae9856b40a9f170762b4197024661"

[[projects]]
  branch = "master"
  name = "github.com/rcrowley/go-metrics"
  packages = ["."]
  revision = "e2704e165165ec55d062f5919b4b29494e9fa790"
[[projects]] | |||
name = "github.com/spf13/afero" | |||
packages = [ | |||
".", | |||
"mem" | |||
] | |||
revision = "787d034dfe70e44075ccc060d346146ef53270ad" | |||
version = "v1.1.1" | |||
[[projects]] | |||
name = "github.com/spf13/cast" | |||
packages = ["."] | |||
revision = "8965335b8c7107321228e3e3702cab9832751bac" | |||
version = "v1.2.0" | |||
[[projects]] | |||
name = "github.com/spf13/cobra" | |||
packages = ["."] | |||
revision = "ef82de70bb3f60c65fb8eebacbb2d122ef517385" | |||
version = "v0.0.3" | |||
[[projects]] | |||
branch = "master" | |||
name = "github.com/spf13/jwalterweatherman" | |||
packages = ["."] | |||
revision = "7c0cea34c8ece3fbeb2b27ab9b59511d360fb394" | |||
[[projects]] | |||
name = "github.com/spf13/pflag" | |||
packages = ["."] | |||
revision = "583c0c0531f06d5278b7d917446061adc344b5cd" | |||
version = "v1.0.1" | |||
[[projects]] | |||
name = "github.com/spf13/viper" | |||
packages = ["."] | |||
revision = "b5e8006cbee93ec955a89ab31e0e3ce3204f3736" | |||
version = "v1.0.2" | |||
[[projects]] | |||
name = "github.com/stretchr/testify" | |||
packages = [ | |||
"assert", | |||
"require" | |||
] | |||
revision = "f35b8ab0b5a2cef36673838d662e249dd9c94686" | |||
version = "v1.2.2" | |||
[[projects]] | |||
branch = "master" | |||
name = "github.com/syndtr/goleveldb" | |||
packages = [ | |||
"leveldb", | |||
"leveldb/cache", | |||
"leveldb/comparer", | |||
"leveldb/errors", | |||
"leveldb/filter", | |||
"leveldb/iterator", | |||
"leveldb/journal", | |||
"leveldb/memdb", | |||
"leveldb/opt", | |||
"leveldb/storage", | |||
"leveldb/table", | |||
"leveldb/util" | |||
] | |||
revision = "e2150783cd35f5b607daca48afd8c57ec54cc995" | |||
[[projects]] | |||
name = "github.com/tendermint/abci" | |||
packages = [ | |||
"client", | |||
"example/code", | |||
"example/counter", | |||
"example/kvstore", | |||
"server", | |||
"types" | |||
] | |||
revision = "198dccf0ddfd1bb176f87657e3286a05a6ed9540" | |||
version = "v0.12.0" | |||
[[projects]] | |||
branch = "master" | |||
name = "github.com/tendermint/ed25519" | |||
packages = [ | |||
".", | |||
"edwards25519", | |||
"extra25519" | |||
] | |||
revision = "d8387025d2b9d158cf4efb07e7ebf814bcce2057" | |||
[[projects]] | |||
name = "github.com/tendermint/go-amino" | |||
packages = ["."] | |||
revision = "ed62928576cfcaf887209dc96142cd79cdfff389" | |||
version = "0.9.9" | |||
[[projects]] | |||
name = "github.com/tendermint/go-crypto" | |||
packages = ["."] | |||
revision = "915416979bf70efa4bcbf1c6cd5d64c5fff9fc19" | |||
version = "v0.6.2" | |||
[[projects]] | |||
name = "github.com/tendermint/tmlibs" | |||
packages = [ | |||
"autofile", | |||
"cli", | |||
"cli/flags", | |||
"clist", | |||
"common", | |||
"db", | |||
"flowrate", | |||
"log", | |||
"merkle", | |||
"test" | |||
] | |||
revision = "692f1d86a6e2c0efa698fd1e4541b68c74ffaf38" | |||
version = "v0.8.4" | |||
[[projects]] | |||
branch = "master" | |||
name = "golang.org/x/crypto" | |||
packages = [ | |||
"curve25519", | |||
"nacl/box", | |||
"nacl/secretbox", | |||
"openpgp/armor", | |||
"openpgp/errors", | |||
"poly1305", | |||
"ripemd160", | |||
"salsa20/salsa" | |||
] | |||
revision = "8ac0e0d97ce45cd83d1d7243c060cb8461dda5e9" | |||
[[projects]] | |||
branch = "master" | |||
name = "golang.org/x/net" | |||
packages = [ | |||
"context", | |||
"http/httpguts", | |||
"http2", | |||
"http2/hpack", | |||
"idna", | |||
"internal/timeseries", | |||
"trace" | |||
] | |||
revision = "db08ff08e8622530d9ed3a0e8ac279f6d4c02196" | |||
[[projects]] | |||
branch = "master" | |||
name = "golang.org/x/sys" | |||
packages = ["unix"] | |||
revision = "a9e25c09b96b8870693763211309e213c6ef299d" | |||
[[projects]] | |||
name = "golang.org/x/text" | |||
packages = [ | |||
"collate", | |||
"collate/build", | |||
"internal/colltab", | |||
"internal/gen", | |||
"internal/tag", | |||
"internal/triegen", | |||
"internal/ucd", | |||
"language", | |||
"secure/bidirule", | |||
"transform", | |||
"unicode/bidi", | |||
"unicode/cldr", | |||
"unicode/norm", | |||
"unicode/rangetable" | |||
] | |||
revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0" | |||
version = "v0.3.0" | |||
[[projects]] | |||
name = "google.golang.org/genproto" | |||
packages = ["googleapis/rpc/status"] | |||
revision = "7fd901a49ba6a7f87732eb344f6e3c5b19d1b200" | |||
[[projects]] | |||
name = "google.golang.org/grpc" | |||
packages = [ | |||
".", | |||
"balancer", | |||
"codes", | |||
"connectivity", | |||
"credentials", | |||
"grpclb/grpc_lb_v1/messages", | |||
"grpclog", | |||
"internal", | |||
"keepalive", | |||
"metadata", | |||
"naming", | |||
"peer", | |||
"resolver", | |||
"stats", | |||
"status", | |||
"tap", | |||
"transport" | |||
] | |||
revision = "5b3c4e850e90a4cf6a20ebd46c8b32a0a3afcb9e" | |||
version = "v1.7.5" | |||
[[projects]] | |||
name = "gopkg.in/yaml.v2" | |||
packages = ["."] | |||
revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183" | |||
version = "v2.2.1" | |||
[solve-meta] | |||
analyzer-name = "dep" | |||
analyzer-version = 1 | |||
inputs-digest = "3bd388e520a08cd0aa14df2d6f5ecb46449d7c36fd80cf52eb775798e6accbaa" | |||
solver-name = "gps-cdcl" | |||
solver-version = 1 |
@ -1,103 +0,0 @@ | |||
# Gopkg.toml example | |||
# | |||
# Refer to https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md | |||
# for detailed Gopkg.toml documentation. | |||
# | |||
# required = ["github.com/user/thing/cmd/thing"] | |||
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"] | |||
# | |||
# [[constraint]] | |||
# name = "github.com/user/project" | |||
# version = "1.0.0" | |||
# | |||
# [[constraint]] | |||
# name = "github.com/user/project2" | |||
# branch = "dev" | |||
# source = "github.com/myfork/project2" | |||
# | |||
# [[override]] | |||
# name = "github.com/x/y" | |||
# version = "2.4.0" | |||
# | |||
# [prune] | |||
# non-go = false | |||
# go-tests = true | |||
# unused-packages = true | |||
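# Editorial note on the constraints below: dep reads "~0.6.0" as
# ">= 0.6.0, < 0.7.0" (patch-level updates only), while "=0.9.9"
# pins that exact release.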
[[constraint]] | |||
name = "github.com/ebuchman/fail-test" | |||
branch = "master" | |||
[[constraint]] | |||
name = "github.com/fortytw2/leaktest" | |||
branch = "master" | |||
[[constraint]] | |||
name = "github.com/go-kit/kit" | |||
version = "~0.6.0" | |||
[[constraint]] | |||
name = "github.com/gogo/protobuf" | |||
version = "~1.0.0" | |||
[[constraint]] | |||
name = "github.com/golang/protobuf" | |||
version = "~1.0.0" | |||
[[constraint]] | |||
name = "github.com/gorilla/websocket" | |||
version = "~1.2.0" | |||
[[constraint]] | |||
name = "github.com/pkg/errors" | |||
version = "~0.8.0" | |||
[[constraint]] | |||
name = "github.com/rcrowley/go-metrics" | |||
branch = "master" | |||
[[constraint]] | |||
name = "github.com/spf13/cobra" | |||
version = "~0.0.1" | |||
[[constraint]] | |||
name = "github.com/spf13/viper" | |||
version = "~1.0.0" | |||
[[constraint]] | |||
name = "github.com/stretchr/testify" | |||
version = "~1.2.1" | |||
[[constraint]] | |||
name = "github.com/tendermint/abci" | |||
version = "~0.12.0" | |||
[[constraint]] | |||
name = "github.com/tendermint/go-crypto" | |||
version = "~0.6.2" | |||
[[constraint]] | |||
name = "github.com/tendermint/go-amino" | |||
version = "=0.9.9" | |||
[[override]] | |||
name = "github.com/tendermint/tmlibs" | |||
version = "~0.8.4" | |||
[[constraint]] | |||
name = "google.golang.org/grpc" | |||
version = "~1.7.3" | |||
# this dependency was updated upstream and broke the build, so it is locked to an old working commit ...
[[override]] | |||
name = "google.golang.org/genproto" | |||
revision = "7fd901a49ba6a7f87732eb344f6e3c5b19d1b200" | |||
[prune] | |||
go-tests = true | |||
unused-packages = true | |||
[[constraint]] | |||
name = "github.com/prometheus/client_golang" | |||
version = "0.8.0" |
@ -1,204 +0,0 @@ | |||
Tendermint Core | |||
License: Apache2.0 | |||
Apache License | |||
Version 2.0, January 2004 | |||
http://www.apache.org/licenses/ | |||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION | |||
1. Definitions. | |||
"License" shall mean the terms and conditions for use, reproduction, | |||
and distribution as defined by Sections 1 through 9 of this document. | |||
"Licensor" shall mean the copyright owner or entity authorized by | |||
the copyright owner that is granting the License. | |||
"Legal Entity" shall mean the union of the acting entity and all | |||
other entities that control, are controlled by, or are under common | |||
control with that entity. For the purposes of this definition, | |||
"control" means (i) the power, direct or indirect, to cause the | |||
direction or management of such entity, whether by contract or | |||
otherwise, or (ii) ownership of fifty percent (50%) or more of the | |||
outstanding shares, or (iii) beneficial ownership of such entity. | |||
"You" (or "Your") shall mean an individual or Legal Entity | |||
exercising permissions granted by this License. | |||
"Source" form shall mean the preferred form for making modifications, | |||
including but not limited to software source code, documentation | |||
source, and configuration files. | |||
"Object" form shall mean any form resulting from mechanical | |||
transformation or translation of a Source form, including but | |||
not limited to compiled object code, generated documentation, | |||
and conversions to other media types. | |||
"Work" shall mean the work of authorship, whether in Source or | |||
Object form, made available under the License, as indicated by a | |||
copyright notice that is included in or attached to the work | |||
(an example is provided in the Appendix below). | |||
"Derivative Works" shall mean any work, whether in Source or Object | |||
form, that is based on (or derived from) the Work and for which the | |||
editorial revisions, annotations, elaborations, or other modifications | |||
represent, as a whole, an original work of authorship. For the purposes | |||
of this License, Derivative Works shall not include works that remain | |||
separable from, or merely link (or bind by name) to the interfaces of, | |||
the Work and Derivative Works thereof. | |||
"Contribution" shall mean any work of authorship, including | |||
the original version of the Work and any modifications or additions | |||
to that Work or Derivative Works thereof, that is intentionally | |||
submitted to Licensor for inclusion in the Work by the copyright owner | |||
or by an individual or Legal Entity authorized to submit on behalf of | |||
the copyright owner. For the purposes of this definition, "submitted" | |||
means any form of electronic, verbal, or written communication sent | |||
to the Licensor or its representatives, including but not limited to | |||
communication on electronic mailing lists, source code control systems, | |||
and issue tracking systems that are managed by, or on behalf of, the | |||
Licensor for the purpose of discussing and improving the Work, but | |||
excluding communication that is conspicuously marked or otherwise | |||
designated in writing by the copyright owner as "Not a Contribution." | |||
"Contributor" shall mean Licensor and any individual or Legal Entity | |||
on behalf of whom a Contribution has been received by Licensor and | |||
subsequently incorporated within the Work. | |||
2. Grant of Copyright License. Subject to the terms and conditions of | |||
this License, each Contributor hereby grants to You a perpetual, | |||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable | |||
copyright license to reproduce, prepare Derivative Works of, | |||
publicly display, publicly perform, sublicense, and distribute the | |||
Work and such Derivative Works in Source or Object form. | |||
3. Grant of Patent License. Subject to the terms and conditions of | |||
this License, each Contributor hereby grants to You a perpetual, | |||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable | |||
(except as stated in this section) patent license to make, have made, | |||
use, offer to sell, sell, import, and otherwise transfer the Work, | |||
where such license applies only to those patent claims licensable | |||
by such Contributor that are necessarily infringed by their | |||
Contribution(s) alone or by combination of their Contribution(s) | |||
with the Work to which such Contribution(s) was submitted. If You | |||
institute patent litigation against any entity (including a | |||
cross-claim or counterclaim in a lawsuit) alleging that the Work | |||
or a Contribution incorporated within the Work constitutes direct | |||
or contributory patent infringement, then any patent licenses | |||
granted to You under this License for that Work shall terminate | |||
as of the date such litigation is filed. | |||
4. Redistribution. You may reproduce and distribute copies of the | |||
Work or Derivative Works thereof in any medium, with or without | |||
modifications, and in Source or Object form, provided that You | |||
meet the following conditions: | |||
(a) You must give any other recipients of the Work or | |||
Derivative Works a copy of this License; and | |||
(b) You must cause any modified files to carry prominent notices | |||
stating that You changed the files; and | |||
(c) You must retain, in the Source form of any Derivative Works | |||
that You distribute, all copyright, patent, trademark, and | |||
attribution notices from the Source form of the Work, | |||
excluding those notices that do not pertain to any part of | |||
the Derivative Works; and | |||
(d) If the Work includes a "NOTICE" text file as part of its | |||
distribution, then any Derivative Works that You distribute must | |||
include a readable copy of the attribution notices contained | |||
within such NOTICE file, excluding those notices that do not | |||
pertain to any part of the Derivative Works, in at least one | |||
of the following places: within a NOTICE text file distributed | |||
as part of the Derivative Works; within the Source form or | |||
documentation, if provided along with the Derivative Works; or, | |||
within a display generated by the Derivative Works, if and | |||
wherever such third-party notices normally appear. The contents | |||
of the NOTICE file are for informational purposes only and | |||
do not modify the License. You may add Your own attribution | |||
notices within Derivative Works that You distribute, alongside | |||
or as an addendum to the NOTICE text from the Work, provided | |||
that such additional attribution notices cannot be construed | |||
as modifying the License. | |||
You may add Your own copyright statement to Your modifications and | |||
may provide additional or different license terms and conditions | |||
for use, reproduction, or distribution of Your modifications, or | |||
for any such Derivative Works as a whole, provided Your use, | |||
reproduction, and distribution of the Work otherwise complies with | |||
the conditions stated in this License. | |||
5. Submission of Contributions. Unless You explicitly state otherwise, | |||
any Contribution intentionally submitted for inclusion in the Work | |||
by You to the Licensor shall be under the terms and conditions of | |||
this License, without any additional terms or conditions. | |||
Notwithstanding the above, nothing herein shall supersede or modify | |||
the terms of any separate license agreement you may have executed | |||
with Licensor regarding such Contributions. | |||
6. Trademarks. This License does not grant permission to use the trade | |||
names, trademarks, service marks, or product names of the Licensor, | |||
except as required for reasonable and customary use in describing the | |||
origin of the Work and reproducing the content of the NOTICE file. | |||
7. Disclaimer of Warranty. Unless required by applicable law or | |||
agreed to in writing, Licensor provides the Work (and each | |||
Contributor provides its Contributions) on an "AS IS" BASIS, | |||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or | |||
implied, including, without limitation, any warranties or conditions | |||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A | |||
PARTICULAR PURPOSE. You are solely responsible for determining the | |||
appropriateness of using or redistributing the Work and assume any | |||
risks associated with Your exercise of permissions under this License. | |||
8. Limitation of Liability. In no event and under no legal theory, | |||
whether in tort (including negligence), contract, or otherwise, | |||
unless required by applicable law (such as deliberate and grossly | |||
negligent acts) or agreed to in writing, shall any Contributor be | |||
liable to You for damages, including any direct, indirect, special, | |||
incidental, or consequential damages of any character arising as a | |||
result of this License or out of the use or inability to use the | |||
Work (including but not limited to damages for loss of goodwill, | |||
work stoppage, computer failure or malfunction, or any and all | |||
other commercial damages or losses), even if such Contributor | |||
has been advised of the possibility of such damages. | |||
9. Accepting Warranty or Additional Liability. While redistributing | |||
the Work or Derivative Works thereof, You may choose to offer, | |||
and charge a fee for, acceptance of support, warranty, indemnity, | |||
or other liability obligations and/or rights consistent with this | |||
License. However, in accepting such obligations, You may act only | |||
on Your own behalf and on Your sole responsibility, not on behalf | |||
of any other Contributor, and only if You agree to indemnify, | |||
defend, and hold each Contributor harmless for any liability | |||
incurred by, or claims asserted against, such Contributor by reason | |||
of your accepting any such warranty or additional liability. | |||
END OF TERMS AND CONDITIONS | |||
APPENDIX: How to apply the Apache License to your work. | |||
To apply the Apache License to your work, attach the following | |||
boilerplate notice, with the fields enclosed by brackets "{}" | |||
replaced with your own identifying information. (Don't include | |||
the brackets!) The text should be enclosed in the appropriate | |||
comment syntax for the file format. We also recommend that a | |||
file or class name and description of purpose be included on the | |||
same "printed page" as the copyright notice for easier | |||
identification within third-party archives. | |||
Copyright 2016 All in Bits, Inc | |||
Licensed under the Apache License, Version 2.0 (the "License"); | |||
you may not use this file except in compliance with the License. | |||
You may obtain a copy of the License at | |||
http://www.apache.org/licenses/LICENSE-2.0 | |||
Unless required by applicable law or agreed to in writing, software | |||
distributed under the License is distributed on an "AS IS" BASIS, | |||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |||
See the License for the specific language governing permissions and | |||
limitations under the License. |
@ -1,236 +0,0 @@ | |||
GOTOOLS = \ | |||
github.com/golang/dep/cmd/dep \ | |||
gopkg.in/alecthomas/gometalinter.v2 | |||
PACKAGES=$(shell go list ./... | grep -v '/vendor/') | |||
BUILD_TAGS?=tendermint | |||
BUILD_FLAGS = -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD`" | |||
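# Note: -X stamps the short git hash into version.GitCommit at link time,
# so a built binary can report the exact commit it was built from.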
all: check build test install | |||
check: check_tools ensure_deps | |||
######################################## | |||
### Build | |||
build: | |||
CGO_ENABLED=0 go build $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint/ | |||
build_race: | |||
CGO_ENABLED=0 go build -race $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint | |||
install: | |||
CGO_ENABLED=0 go install $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' ./cmd/tendermint | |||
######################################## | |||
### Distribution | |||
# dist builds binaries for all platforms and packages them for distribution | |||
dist: | |||
@BUILD_TAGS='$(BUILD_TAGS)' sh -c "'$(CURDIR)/scripts/dist.sh'" | |||
######################################## | |||
### Tools & dependencies | |||
check_tools: | |||
@# https://stackoverflow.com/a/25668869 | |||
@echo "Found tools: $(foreach tool,$(notdir $(GOTOOLS)),\ | |||
$(if $(shell which $(tool)),$(tool),$(error "No $(tool) in PATH")))" | |||
get_tools: | |||
@echo "--> Installing tools" | |||
go get -u -v $(GOTOOLS) | |||
@gometalinter.v2 --install | |||
update_tools: | |||
@echo "--> Updating tools" | |||
@go get -u $(GOTOOLS) | |||
#Run this from CI | |||
get_vendor_deps: | |||
@rm -rf vendor/ | |||
@echo "--> Running dep" | |||
@dep ensure -vendor-only | |||
#Run this locally. | |||
ensure_deps: | |||
@rm -rf vendor/ | |||
@echo "--> Running dep" | |||
@dep ensure | |||
draw_deps: | |||
@# requires brew install graphviz or apt-get install graphviz | |||
go get github.com/RobotsAndPencils/goviz | |||
@goviz -i github.com/tendermint/tendermint/cmd/tendermint -d 3 | dot -Tpng -o dependency-graph.png | |||
get_deps_bin_size: | |||
@# Copy of build recipe with additional flags to perform binary size analysis | |||
$(eval $(shell go build -work -a $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o build/tendermint ./cmd/tendermint/ 2>&1)) | |||
@find $(WORK) -type f -name "*.a" | xargs -I{} du -hxs "{}" | sort -rh | sed -e s:${WORK}/::g > deps_bin_size.log | |||
@echo "Results can be found here: $(CURDIR)/deps_bin_size.log" | |||
######################################## | |||
### Testing | |||
## required to be run first by most tests | |||
build_docker_test_image: | |||
docker build -t tester -f ./test/docker/Dockerfile . | |||
### coverage, app, persistence, and libs tests | |||
test_cover: | |||
# run the go unit tests with coverage | |||
bash test/test_cover.sh | |||
test_apps: | |||
# run the app tests using bash | |||
# requires `abci-cli` and `tendermint` binaries installed | |||
bash test/app/test.sh | |||
test_persistence: | |||
# run the persistence tests using bash | |||
# requires `abci-cli` installed | |||
docker run --name run_persistence -t tester bash test/persist/test_failure_indices.sh | |||
# TODO undockerize | |||
# bash test/persist/test_failure_indices.sh | |||
test_p2p: | |||
docker rm -f rsyslog || true | |||
rm -rf test/logs || true | |||
mkdir test/logs | |||
cd test/ && docker run -d -v "logs:/var/log/" -p 127.0.0.1:5514:514/udp --name rsyslog voxxit/rsyslog
# requires the 'tester' image built above
bash test/p2p/test.sh tester | |||
need_abci: | |||
bash scripts/install_abci_apps.sh | |||
test_integrations: | |||
make build_docker_test_image | |||
make get_tools | |||
make get_vendor_deps | |||
make install | |||
make need_abci | |||
make test_cover | |||
make test_apps | |||
make test_persistence | |||
make test_p2p | |||
test_release: | |||
@go test -tags release $(PACKAGES) | |||
test100: | |||
@for i in {1..100}; do make test; done | |||
vagrant_test: | |||
vagrant up | |||
vagrant ssh -c 'make test_integrations' | |||
### go tests | |||
test: | |||
@echo "--> Running go test" | |||
@go test $(PACKAGES) | |||
test_race: | |||
@echo "--> Running go test --race" | |||
@go test -v -race $(PACKAGES) | |||
######################################## | |||
### Formatting, linting, and vetting | |||
fmt: | |||
@go fmt ./... | |||
metalinter: | |||
@echo "--> Running linter" | |||
@gometalinter.v2 --vendor --deadline=600s --disable-all \ | |||
--enable=deadcode \ | |||
--enable=gosimple \ | |||
--enable=misspell \ | |||
--enable=safesql \ | |||
./... | |||
#--enable=gas \ | |||
#--enable=maligned \ | |||
#--enable=dupl \ | |||
#--enable=errcheck \ | |||
#--enable=goconst \ | |||
#--enable=gocyclo \ | |||
#--enable=goimports \ | |||
#--enable=golint \ <== comments on anything exported | |||
#--enable=gotype \ | |||
#--enable=ineffassign \ | |||
#--enable=interfacer \ | |||
#--enable=megacheck \ | |||
#--enable=staticcheck \ | |||
#--enable=structcheck \ | |||
#--enable=unconvert \ | |||
#--enable=unparam \ | |||
#--enable=unused \ | |||
#--enable=varcheck \ | |||
#--enable=vet \ | |||
#--enable=vetshadow \ | |||
metalinter_all: | |||
@echo "--> Running linter (all)" | |||
gometalinter.v2 --vendor --deadline=600s --enable-all --disable=lll ./... | |||
########################################################### | |||
### Docker image | |||
build-docker: | |||
cp build/tendermint DOCKER/tendermint | |||
docker build --label=tendermint --tag="tendermint/tendermint" DOCKER | |||
rm -rf DOCKER/tendermint | |||
########################################################### | |||
### Local testnet using docker | |||
# Build linux binary on other platforms | |||
build-linux: | |||
GOOS=linux GOARCH=amd64 $(MAKE) build | |||
build-docker-localnode: | |||
cd networks/local && make
# Run a 4-node testnet locally | |||
localnet-start: localnet-stop | |||
@if ! [ -f build/node0/config/genesis.json ]; then docker run --rm -v $(CURDIR)/build:/tendermint:Z tendermint/localnode testnet --v 4 --o . --populate-persistent-peers --starting-ip-address 192.167.10.2 ; fi | |||
docker-compose up | |||
# Stop testnet | |||
localnet-stop: | |||
docker-compose down | |||
########################################################### | |||
### Remote full-nodes (sentry) using terraform and ansible | |||
# Server management | |||
sentry-start: | |||
@if [ -z "$(DO_API_TOKEN)" ]; then echo "DO_API_TOKEN environment variable not set." ; false ; fi | |||
@if ! [ -f $(HOME)/.ssh/id_rsa.pub ]; then ssh-keygen ; fi | |||
cd networks/remote/terraform && terraform init && terraform apply -var DO_API_TOKEN="$(DO_API_TOKEN)" -var SSH_KEY_FILE="$(HOME)/.ssh/id_rsa.pub" | |||
@if ! [ -f $(CURDIR)/build/node0/config/genesis.json ]; then docker run --rm -v $(CURDIR)/build:/tendermint:Z tendermint/localnode testnet --v 0 --n 4 --o . ; fi | |||
cd networks/remote/ansible && ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i inventory/digital_ocean.py -l sentrynet install.yml | |||
@echo "Next step: Add your validator setup in the genesis.json and config.tml files and run \"make sentry-config\". (Public key of validator, chain ID, peer IP and node ID.)" | |||
# Configuration management | |||
sentry-config: | |||
cd networks/remote/ansible && ansible-playbook -i inventory/digital_ocean.py -l sentrynet config.yml -e BINARY=$(CURDIR)/build/tendermint -e CONFIGDIR=$(CURDIR)/build | |||
sentry-stop: | |||
@if [ -z "$(DO_API_TOKEN)" ]; then echo "DO_API_TOKEN environment variable not set." ; false ; fi | |||
cd networks/remote/terraform && terraform destroy -var DO_API_TOKEN="$(DO_API_TOKEN)" -var SSH_KEY_FILE="$(HOME)/.ssh/id_rsa.pub" | |||
# meant for the CI, inspect script & adapt accordingly | |||
build-slate: | |||
bash scripts/slate.sh | |||
# To avoid unintended conflicts with file names, always add to .PHONY | |||
# unless there is a reason not to. | |||
# https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html | |||
.PHONY: check build build_race dist install check_tools get_tools update_tools get_vendor_deps draw_deps test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate |
@ -1,138 +0,0 @@ | |||
# Tendermint | |||
[Byzantine-Fault Tolerant](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance) | |||
[State Machine Replication](https://en.wikipedia.org/wiki/State_machine_replication). | |||
Or [Blockchain](https://en.wikipedia.org/wiki/Blockchain_(database)) for short. | |||
[![version](https://img.shields.io/github/tag/tendermint/tendermint.svg)](https://github.com/tendermint/tendermint/releases/latest) | |||
[![API Reference]( | |||
https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667 | |||
)](https://godoc.org/github.com/tendermint/tendermint) | |||
[![Go version](https://img.shields.io/badge/go-1.9.2-blue.svg)](https://github.com/moovweb/gvm) | |||
[![riot.im](https://img.shields.io/badge/riot.im-JOIN%20CHAT-green.svg)](https://riot.im/app/#/room/#tendermint:matrix.org) | |||
[![license](https://img.shields.io/github/license/tendermint/tendermint.svg)](https://github.com/tendermint/tendermint/blob/master/LICENSE) | |||
[![](https://tokei.rs/b1/github/tendermint/tendermint?category=lines)](https://github.com/tendermint/tendermint) | |||
Branch | Tests | Coverage | |||
----------|-------|---------- | |||
master | [![CircleCI](https://circleci.com/gh/tendermint/tendermint/tree/master.svg?style=shield)](https://circleci.com/gh/tendermint/tendermint/tree/master) | [![codecov](https://codecov.io/gh/tendermint/tendermint/branch/master/graph/badge.svg)](https://codecov.io/gh/tendermint/tendermint) | |||
develop | [![CircleCI](https://circleci.com/gh/tendermint/tendermint/tree/develop.svg?style=shield)](https://circleci.com/gh/tendermint/tendermint/tree/develop) | [![codecov](https://codecov.io/gh/tendermint/tendermint/branch/develop/graph/badge.svg)](https://codecov.io/gh/tendermint/tendermint) | |||
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine - written in any programming language - | |||
and securely replicates it on many machines. | |||
For protocol details, see [the specification](/docs/spec). | |||
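To make that concrete, here is a minimal counter application, a sketch only, written against the ABCI v0.12 Go API pinned in this repo's Gopkg.toml (the `CounterApp` name and the port are illustrative, not part of this codebase):

```go
package main

import (
	"github.com/tendermint/abci/server"
	"github.com/tendermint/abci/types"
	cmn "github.com/tendermint/tmlibs/common"
)

// CounterApp is a toy deterministic state machine: its entire state is a
// single counter, incremented once per delivered transaction.
type CounterApp struct {
	types.BaseApplication // provides no-op defaults for the other ABCI methods
	count int64
}

func (app *CounterApp) DeliverTx(tx []byte) types.ResponseDeliverTx {
	app.count++
	return types.ResponseDeliverTx{Code: 0} // code 0 means OK
}

func main() {
	// Serve the application over a socket for a local tendermint node.
	srv, err := server.NewServer("tcp://127.0.0.1:26658", "socket", &CounterApp{})
	if err != nil {
		cmn.Exit(err.Error())
	}
	if err := srv.Start(); err != nil {
		cmn.Exit(err.Error())
	}
	cmn.TrapSignal(func() { srv.Stop() })
}
```

Pointing a node at it (`tendermint node --proxy_app tcp://127.0.0.1:26658`) replicates the counter on every machine in the network.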
## A Note on Production Readiness | |||
While Tendermint is being used in production in private, permissioned | |||
environments, we are still working actively to harden and audit it in preparation | |||
for use in public blockchains, such as the [Cosmos Network](https://cosmos.network/). | |||
We are also still making breaking changes to the protocol and the APIs. | |||
Thus we tag the releases as *alpha software*. | |||
In any case, if you intend to run Tendermint in production, | |||
please [contact us](https://riot.im/app/#/room/#tendermint:matrix.org) :) | |||
## Security | |||
To report a security vulnerability, see our [bug bounty | |||
program](https://tendermint.com/security). | |||
For examples of the kinds of bugs we're looking for, see [SECURITY.md](SECURITY.md).
## Minimum requirements | |||
Requirement|Notes | |||
---|--- | |||
Go version | Go1.9 or higher | |||
## Install | |||
See the [install instructions](/docs/install.rst) | |||
## Quick Start | |||
- [Single node](/docs/using-tendermint.rst) | |||
- [Local cluster using docker-compose](/networks/local) | |||
- [Remote cluster using terraform and ansible](/docs/terraform-and-ansible.rst) | |||
- [Join the public testnet](https://cosmos.network/testnet) | |||
## Resources | |||
### Tendermint Core | |||
For details about the blockchain data structures and the p2p protocols, see the
[Tendermint specification](/docs/spec).
For details on using the software, [Read The Docs](https://tendermint.readthedocs.io/en/master/). | |||
Additional information about some (and eventually all) of the sub-projects below can be found at Read The Docs.
### Sub-projects | |||
* [ABCI](http://github.com/tendermint/abci), the Application Blockchain Interface | |||
* [Go-Wire](http://github.com/tendermint/go-wire), a deterministic serialization library | |||
* [Go-Crypto](http://github.com/tendermint/go-crypto), an elliptic curve cryptography library | |||
* [TmLibs](http://github.com/tendermint/tmlibs), an assortment of Go libraries used internally | |||
* [IAVL](http://github.com/tendermint/iavl), Merkleized IAVL+ Tree implementation | |||
### Tools | |||
* [Deployment, Benchmarking, and Monitoring](http://tendermint.readthedocs.io/projects/tools/en/develop/index.html#tendermint-tools) | |||
### Applications | |||
* [Cosmos SDK](http://github.com/cosmos/cosmos-sdk), a cryptocurrency application framework
* [Ethermint](http://github.com/tendermint/ethermint), Ethereum on Tendermint
* [Many more](https://tendermint.readthedocs.io/en/master/ecosystem.html#abci-applications) | |||
### More | |||
* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769) | |||
* [Original Whitepaper](https://tendermint.com/static/docs/tendermint.pdf) | |||
* [Tendermint Blog](https://blog.cosmos.network/tendermint/home) | |||
* [Cosmos Blog](https://blog.cosmos.network) | |||
## Contributing | |||
Yay open source! Please see our [contributing guidelines](CONTRIBUTING.md). | |||
## Versioning | |||
### SemVer | |||
Tendermint uses [SemVer](http://semver.org/) to determine when and how the version changes. | |||
According to SemVer, anything in the public API can change at any time before version 1.0.0.
To provide some stability to Tendermint users in these 0.X.X days, the MINOR version is used
to signal breaking changes across a subset of the total public API. This subset includes all
interfaces exposed to other processes (cli, rpc, p2p, etc.), but does not
include the in-process Go APIs. For example, a bump from 0.19.x to 0.20.0 may change the
RPC or P2P wire protocols, while a patch release like 0.20.1 may not.
That said, breaking changes in the following packages will be documented in the | |||
CHANGELOG even if they don't lead to MINOR version bumps: | |||
- types | |||
- rpc/client | |||
- config | |||
- node | |||
Exported objects in these packages that are not covered by the versioning scheme | |||
are explicitly marked by `// UNSTABLE` in their go doc comment and may change at any time. | |||
Functions, types, and values in any other package may also change at any time. | |||
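For illustration, such a marker looks like this (a hypothetical variable, not real Tendermint code):

```go
package types

import "time"

// PeerQueryTimeout is how long we wait for a peer to answer a status query.
// UNSTABLE
var PeerQueryTimeout = 10 * time.Second
```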
### Upgrades | |||
In an effort to avoid accumulating technical debt prior to 1.0.0,
we do not guarantee that breaking changes (i.e. bumps in the MINOR version)
will be compatible with existing tendermint blockchains. In these cases you will
have to start a new blockchain, or write something custom to get the old
data into the new chain.
However, any bump in the PATCH version should be compatible with existing histories | |||
(if not please open an [issue](https://github.com/tendermint/tendermint/issues)). | |||
## Code of Conduct | |||
Please read, understand and adhere to our [code of conduct](CODE_OF_CONDUCT.md). |
@ -1,23 +0,0 @@ | |||
# Roadmap | |||
BREAKING CHANGES: | |||
- Better support for injecting randomness | |||
- Upgrade consensus for more real-time use of evidence | |||
FEATURES: | |||
- Use the chain as its own CA for nodes and validators | |||
- Tooling to run multiple blockchains/apps, possibly in a single process | |||
- State syncing (without transaction replay) | |||
- Add authentication and rate-limiting to the RPC
IMPROVEMENTS: | |||
- Improve subtleties around mempool caching and logic | |||
- Consensus optimizations: | |||
- cache block parts for faster agreement after round changes | |||
- propagate block parts rarest first | |||
- Better testing of the consensus state machine (i.e. use a DSL)
- Auto-compiled serialization/deserialization code instead of go-wire reflection
BUG FIXES: | |||
- Graceful handling/recovery for apps that have non-determinism or fail to halt | |||
- Graceful handling/recovery for violations of safety or liveness
@ -1,71 +0,0 @@ | |||
# Security | |||
As part of our [Coordinated Vulnerability Disclosure | |||
Policy](https://tendermint.com/security), we operate a bug bounty. | |||
See the policy for more details on submissions and rewards. | |||
Here is a list of examples of the kinds of bugs we're most interested in: | |||
## Specification | |||
- Conceptual flaws | |||
- Ambiguities, inconsistencies, or incorrect statements | |||
- Mismatch between the specification and the implementation of any component
## Consensus | |||
Assuming less than 1/3 of the voting power is Byzantine (malicious), e.g. at most 1 of 4 equal-weight validators:
- Validation of blockchain data structures, including blocks, block parts, | |||
votes, and so on | |||
- Execution of blocks | |||
- Validator set changes | |||
- Proposer round robin | |||
- Two nodes committing conflicting blocks for the same height (safety failure) | |||
- A correct node signing conflicting votes | |||
- A node halting (liveness failure) | |||
- Syncing new and old nodes | |||
## Networking | |||
- Authenticated encryption (MITM, information leakage) | |||
- Eclipse attacks | |||
- Sybil attacks | |||
- Long-range attacks | |||
- Denial-of-Service | |||
## RPC | |||
- Write-access to anything besides sending transactions | |||
- Denial-of-Service | |||
- Leakage of secrets | |||
## Denial-of-Service | |||
Attacks may come through the P2P network or the RPC: | |||
- Amplification attacks | |||
- Resource abuse | |||
- Deadlocks and race conditions | |||
- Panics and unhandled errors | |||
## Libraries | |||
- Serialization (Amino) | |||
- Reading/Writing files and databases | |||
- Logging and monitoring | |||
## Cryptography | |||
- Elliptic curves for validator signatures | |||
- Hash algorithms and Merkle trees for block validation | |||
- Authenticated encryption for P2P connections | |||
## Light Client | |||
- Validation of blockchain data structures | |||
- Correctly validating an incorrect proof | |||
- Incorrectly validating a correct proof | |||
- Syncing validator set changes | |||
@ -1,58 +0,0 @@ | |||
# -*- mode: ruby -*- | |||
# vi: set ft=ruby : | |||
Vagrant.configure("2") do |config| | |||
config.vm.box = "ubuntu/xenial64" | |||
config.vm.provider "virtualbox" do |v| | |||
v.memory = 4096 | |||
v.cpus = 2 | |||
end | |||
config.vm.provision "shell", inline: <<-SHELL | |||
apt-get update | |||
# install base requirements | |||
apt-get install -y --no-install-recommends wget curl jq zip \ | |||
make shellcheck bsdmainutils psmisc | |||
apt-get install -y language-pack-en | |||
# install docker | |||
apt-get install -y --no-install-recommends apt-transport-https \ | |||
ca-certificates curl software-properties-common | |||
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - | |||
add-apt-repository \ | |||
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \ | |||
$(lsb_release -cs) \ | |||
stable" | |||
apt-get install -y docker-ce | |||
usermod -a -G docker vagrant | |||
# install go | |||
wget -q https://dl.google.com/go/go1.10.1.linux-amd64.tar.gz | |||
tar -xvf go1.10.1.linux-amd64.tar.gz | |||
mv go /usr/local | |||
rm -f go1.10.1.linux-amd64.tar.gz | |||
# cleanup | |||
apt-get autoremove -y | |||
# set env variables | |||
echo 'export GOROOT=/usr/local/go' >> /home/vagrant/.bash_profile | |||
echo 'export GOPATH=/home/vagrant/go' >> /home/vagrant/.bash_profile | |||
echo 'export PATH=$PATH:$GOROOT/bin:$GOPATH/bin' >> /home/vagrant/.bash_profile | |||
echo 'export LC_ALL=en_US.UTF-8' >> /home/vagrant/.bash_profile | |||
echo 'cd go/src/github.com/tendermint/tendermint' >> /home/vagrant/.bash_profile | |||
mkdir -p /home/vagrant/go/bin | |||
mkdir -p /home/vagrant/go/src/github.com/tendermint | |||
ln -s /vagrant /home/vagrant/go/src/github.com/tendermint/tendermint | |||
chown -R vagrant:vagrant /home/vagrant/go | |||
chown vagrant:vagrant /home/vagrant/.bash_profile | |||
# get all deps and tools, ready to install/test | |||
su - vagrant -c 'source /home/vagrant/.bash_profile' | |||
su - vagrant -c 'cd /home/vagrant/go/src/github.com/tendermint/tendermint && make get_tools && make get_vendor_deps' | |||
SHELL | |||
end |
@ -1,13 +0,0 @@ | |||
version: 1.0.{build} | |||
configuration: Release | |||
platform: | |||
- x64 | |||
- x86 | |||
clone_folder: c:\go\path\src\github.com\tendermint\tendermint | |||
before_build: | |||
- cmd: set GOPATH=%GOROOT%\path | |||
- cmd: set PATH=%GOPATH%\bin;%PATH% | |||
- cmd: make get_vendor_deps | |||
build_script: | |||
- cmd: make test | |||
test: off |
@ -1,29 +0,0 @@ | |||
package benchmarks | |||
import ( | |||
"sync/atomic" | |||
"testing" | |||
"unsafe" | |||
) | |||
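// The two benchmarks below compare atomic.StoreUintptr with atomic.StorePointer
// over a fixed ring of 1000 slots. The unsafe.Pointer values stored are
// fabricated from integers, which is fine for timing but invalid in real code.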
func BenchmarkAtomicUintPtr(b *testing.B) { | |||
b.StopTimer() | |||
pointers := make([]uintptr, 1000) | |||
b.Log(unsafe.Sizeof(pointers[0])) | |||
b.StartTimer() | |||
for j := 0; j < b.N; j++ { | |||
atomic.StoreUintptr(&pointers[j%1000], uintptr(j)) | |||
} | |||
} | |||
func BenchmarkAtomicPointer(b *testing.B) { | |||
b.StopTimer() | |||
pointers := make([]unsafe.Pointer, 1000) | |||
b.Log(unsafe.Sizeof(pointers[0])) | |||
b.StartTimer() | |||
for j := 0; j < b.N; j++ { | |||
atomic.StorePointer(&pointers[j%1000], unsafe.Pointer(uintptr(j))) | |||
} | |||
} |
@ -1,2 +0,0 @@ | |||
data | |||
@ -1,80 +0,0 @@ | |||
#!/bin/bash | |||
DATA=$GOPATH/src/github.com/tendermint/tendermint/benchmarks/blockchain/data | |||
if [ ! -d $DATA ]; then | |||
echo "no data found, generating a chain... (this only has to happen once)" | |||
tendermint init --home $DATA | |||
cp $DATA/config.toml $DATA/config2.toml | |||
echo " | |||
[consensus] | |||
timeout_commit = 0 | |||
" >> $DATA/config.toml | |||
echo "starting node" | |||
tendermint node \ | |||
--home $DATA \ | |||
--proxy_app kvstore \ | |||
--p2p.laddr tcp://127.0.0.1:56656 \ | |||
--rpc.laddr tcp://127.0.0.1:56657 \ | |||
--log_level error & | |||
echo "making blocks for 60s" | |||
sleep 60 | |||
mv $DATA/config2.toml $DATA/config.toml | |||
kill %1 | |||
echo "done generating chain." | |||
fi | |||
# validator node | |||
HOME1=$TMPDIR$RANDOM$RANDOM | |||
cp -R $DATA $HOME1 | |||
echo "starting validator node" | |||
tendermint node \ | |||
--home $HOME1 \ | |||
--proxy_app kvstore \ | |||
--p2p.laddr tcp://127.0.0.1:56656 \ | |||
--rpc.laddr tcp://127.0.0.1:56657 \ | |||
--log_level error & | |||
sleep 1 | |||
# downloader node | |||
HOME2=$TMPDIR$RANDOM$RANDOM | |||
tendermint init --home $HOME2 | |||
cp $HOME1/genesis.json $HOME2 | |||
printf "starting downloader node" | |||
tendermint node \ | |||
--home $HOME2 \ | |||
--proxy_app kvstore \ | |||
--p2p.laddr tcp://127.0.0.1:56666 \ | |||
--rpc.laddr tcp://127.0.0.1:56667 \ | |||
--p2p.persistent_peers 127.0.0.1:56656 \ | |||
--log_level error & | |||
# wait for node to start up so we only count time where we are actually syncing | |||
sleep 0.5 | |||
while curl localhost:56667/status 2> /dev/null | grep "\"latest_block_height\": 0," > /dev/null | |||
do | |||
printf '.' | |||
sleep 0.2 | |||
done | |||
echo | |||
echo "syncing blockchain for 10s" | |||
for i in {1..10} | |||
do | |||
sleep 1 | |||
HEIGHT="$(curl localhost:56667/status 2> /dev/null \ | |||
| grep 'latest_block_height' \ | |||
| grep -o ' [0-9]*' \ | |||
| xargs)" | |||
let 'RATE = HEIGHT / i' | |||
echo "height: $HEIGHT, blocks/sec: $RATE" | |||
done | |||
kill %1 | |||
kill %2 | |||
rm -rf $HOME1 $HOME2 |
@ -1,19 +0,0 @@ | |||
package benchmarks | |||
import ( | |||
"testing" | |||
) | |||
func BenchmarkChanMakeClose(b *testing.B) { | |||
b.StopTimer() | |||
b.StartTimer() | |||
for j := 0; j < b.N; j++ { | |||
foo := make(chan struct{}) | |||
close(foo) | |||
something, ok := <-foo | |||
if ok { | |||
b.Error(something, ok) | |||
} | |||
} | |||
} |
@ -1,129 +0,0 @@ | |||
package benchmarks | |||
import ( | |||
"testing" | |||
"time" | |||
"github.com/tendermint/go-amino" | |||
"github.com/tendermint/go-crypto" | |||
proto "github.com/tendermint/tendermint/benchmarks/proto" | |||
"github.com/tendermint/tendermint/p2p" | |||
ctypes "github.com/tendermint/tendermint/rpc/core/types" | |||
) | |||
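// Each benchmark below accumulates the encoded lengths into a counter so the
// marshalling work cannot be optimized away as dead code.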
func BenchmarkEncodeStatusWire(b *testing.B) { | |||
b.StopTimer() | |||
cdc := amino.NewCodec() | |||
ctypes.RegisterAmino(cdc) | |||
nodeKey := p2p.NodeKey{PrivKey: crypto.GenPrivKeyEd25519()} | |||
status := &ctypes.ResultStatus{ | |||
NodeInfo: p2p.NodeInfo{ | |||
ID: nodeKey.ID(), | |||
Moniker: "SOMENAME", | |||
Network: "SOMENAME", | |||
ListenAddr: "SOMEADDR", | |||
Version: "SOMEVER", | |||
Other: []string{"SOMESTRING", "OTHERSTRING"}, | |||
}, | |||
SyncInfo: ctypes.SyncInfo{ | |||
LatestBlockHash: []byte("SOMEBYTES"), | |||
LatestBlockHeight: 123, | |||
LatestBlockTime: time.Unix(0, 1234), | |||
}, | |||
ValidatorInfo: ctypes.ValidatorInfo{ | |||
PubKey: nodeKey.PubKey(), | |||
}, | |||
} | |||
b.StartTimer() | |||
counter := 0 | |||
for i := 0; i < b.N; i++ { | |||
jsonBytes, err := cdc.MarshalJSON(status) | |||
if err != nil { | |||
panic(err) | |||
} | |||
counter += len(jsonBytes) | |||
} | |||
} | |||
func BenchmarkEncodeNodeInfoWire(b *testing.B) { | |||
b.StopTimer() | |||
cdc := amino.NewCodec() | |||
ctypes.RegisterAmino(cdc) | |||
nodeKey := p2p.NodeKey{PrivKey: crypto.GenPrivKeyEd25519()} | |||
nodeInfo := p2p.NodeInfo{ | |||
ID: nodeKey.ID(), | |||
Moniker: "SOMENAME", | |||
Network: "SOMENAME", | |||
ListenAddr: "SOMEADDR", | |||
Version: "SOMEVER", | |||
Other: []string{"SOMESTRING", "OTHERSTRING"}, | |||
} | |||
b.StartTimer() | |||
counter := 0 | |||
for i := 0; i < b.N; i++ { | |||
jsonBytes, err := cdc.MarshalJSON(nodeInfo) | |||
if err != nil { | |||
panic(err) | |||
} | |||
counter += len(jsonBytes) | |||
} | |||
} | |||
func BenchmarkEncodeNodeInfoBinary(b *testing.B) { | |||
b.StopTimer() | |||
cdc := amino.NewCodec() | |||
ctypes.RegisterAmino(cdc) | |||
nodeKey := p2p.NodeKey{PrivKey: crypto.GenPrivKeyEd25519()} | |||
nodeInfo := p2p.NodeInfo{ | |||
ID: nodeKey.ID(), | |||
Moniker: "SOMENAME", | |||
Network: "SOMENAME", | |||
ListenAddr: "SOMEADDR", | |||
Version: "SOMEVER", | |||
Other: []string{"SOMESTRING", "OTHERSTRING"}, | |||
} | |||
b.StartTimer() | |||
counter := 0 | |||
for i := 0; i < b.N; i++ { | |||
jsonBytes := cdc.MustMarshalBinaryBare(nodeInfo) | |||
counter += len(jsonBytes) | |||
} | |||
} | |||
func BenchmarkEncodeNodeInfoProto(b *testing.B) { | |||
b.StopTimer() | |||
nodeKey := p2p.NodeKey{PrivKey: crypto.GenPrivKeyEd25519()} | |||
nodeID := string(nodeKey.ID()) | |||
someName := "SOMENAME" | |||
someAddr := "SOMEADDR" | |||
someVer := "SOMEVER" | |||
someString := "SOMESTRING" | |||
otherString := "OTHERSTRING" | |||
nodeInfo := proto.NodeInfo{ | |||
Id: &proto.ID{Id: &nodeID}, | |||
Moniker: &someName, | |||
Network: &someName, | |||
ListenAddr: &someAddr, | |||
Version: &someVer, | |||
Other: []string{someString, otherString}, | |||
} | |||
b.StartTimer() | |||
counter := 0 | |||
for i := 0; i < b.N; i++ { | |||
bytes, err := nodeInfo.Marshal() | |||
if err != nil { | |||
b.Fatal(err) | |||
return | |||
} | |||
//jsonBytes := wire.JSONBytes(nodeInfo) | |||
counter += len(bytes) | |||
} | |||
} |
@ -1 +0,0 @@ | |||
package benchmarks |
@ -1,35 +0,0 @@ | |||
package benchmarks | |||
import ( | |||
"testing" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
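// BenchmarkSomething times building a map of 100k random 100-byte string keys
// and then probing it with 100k other random strings (nearly all lookups miss).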
func BenchmarkSomething(b *testing.B) { | |||
b.StopTimer() | |||
numItems := 100000 | |||
numChecks := 100000 | |||
keys := make([]string, numItems) | |||
for i := 0; i < numItems; i++ { | |||
keys[i] = cmn.RandStr(100) | |||
} | |||
txs := make([]string, numChecks) | |||
for i := 0; i < numChecks; i++ { | |||
txs[i] = cmn.RandStr(100) | |||
} | |||
b.StartTimer() | |||
counter := 0 | |||
for j := 0; j < b.N; j++ { | |||
foo := make(map[string]string) | |||
for _, key := range keys { | |||
foo[key] = key | |||
} | |||
for _, tx := range txs { | |||
if _, ok := foo[tx]; ok { | |||
counter++ | |||
} | |||
} | |||
} | |||
} |
@ -1,33 +0,0 @@ | |||
package benchmarks | |||
import ( | |||
"os" | |||
"testing" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
func BenchmarkFileWrite(b *testing.B) { | |||
b.StopTimer() | |||
file, err := os.OpenFile("benchmark_file_write.out", | |||
os.O_RDWR|os.O_CREATE|os.O_APPEND, 0600) | |||
if err != nil {
b.Fatal(err) // Fatal, not Error: continuing with a nil file would panic
}
testString := cmn.RandStr(200) + "\n" | |||
b.StartTimer() | |||
for i := 0; i < b.N; i++ { | |||
_, err := file.Write([]byte(testString)) | |||
if err != nil { | |||
b.Error(err) | |||
} | |||
} | |||
if err := file.Close(); err != nil { | |||
b.Error(err) | |||
} | |||
if err := os.Remove("benchmark_file_write.out"); err != nil { | |||
b.Error(err) | |||
} | |||
} |
@ -1,2 +0,0 @@ | |||
Protobuf message definitions used by the encoding benchmarks.
Code is generated with gogoprotobuf.
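A regeneration sketch (assuming `protoc` and gogoprotobuf's `protoc-gen-gogo` plugin are on the PATH; the exact invocation may differ per setup):

```
protoc --gogo_out=. *.proto
```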
@ -1,29 +0,0 @@ | |||
message ResultStatus { | |||
optional NodeInfo nodeInfo = 1; | |||
required PubKey pubKey = 2; | |||
required bytes latestBlockHash = 3; | |||
required int64 latestBlockHeight = 4; | |||
required int64 latestBlocktime = 5; | |||
} | |||
message NodeInfo { | |||
required ID id = 1; | |||
required string moniker = 2; | |||
required string network = 3; | |||
required string remoteAddr = 4; | |||
required string listenAddr = 5; | |||
required string version = 6; | |||
repeated string other = 7; | |||
} | |||
message ID { | |||
required string id = 1; | |||
} | |||
message PubKey { | |||
optional PubKeyEd25519 ed25519 = 1; | |||
} | |||
message PubKeyEd25519 { | |||
required bytes bytes = 1; | |||
} |
@ -1,47 +0,0 @@ | |||
package main | |||
import ( | |||
"context" | |||
"encoding/binary" | |||
"fmt" | |||
"time" | |||
rpcclient "github.com/tendermint/tendermint/rpc/lib/client" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
func main() { | |||
wsc := rpcclient.NewWSClient("127.0.0.1:26657", "/websocket") | |||
err := wsc.Start() | |||
if err != nil { | |||
cmn.Exit(err.Error()) | |||
} | |||
defer wsc.Stop() | |||
// Read a bunch of responses | |||
go func() { | |||
for { | |||
_, ok := <-wsc.ResponsesCh | |||
if !ok { | |||
break | |||
} | |||
//fmt.Println("Received response", string(wire.JSONBytes(res))) | |||
} | |||
}() | |||
// Make a bunch of requests | |||
buf := make([]byte, 32) | |||
for i := 0; ; i++ { | |||
binary.BigEndian.PutUint64(buf, uint64(i)) | |||
//txBytes := hex.EncodeToString(buf[:n]) | |||
fmt.Print(".") | |||
err = wsc.Call(context.TODO(), "broadcast_tx", map[string]interface{}{"tx": buf[:8]}) | |||
if err != nil { | |||
cmn.Exit(err.Error()) | |||
} | |||
if i%1000 == 0 { | |||
fmt.Println(i) | |||
} | |||
time.Sleep(time.Millisecond)
} | |||
} |
@ -1,587 +0,0 @@ | |||
package blockchain | |||
import ( | |||
"errors" | |||
"fmt" | |||
"math" | |||
"sync" | |||
"sync/atomic" | |||
"time" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
flow "github.com/tendermint/tmlibs/flowrate" | |||
"github.com/tendermint/tmlibs/log" | |||
"github.com/tendermint/tendermint/p2p" | |||
"github.com/tendermint/tendermint/types" | |||
) | |||
/* | |||
e.g., L = latency = 0.1 s
P = num peers = 10
FN = num full nodes
BS = 1 kB block size
CB = 1 Mbit/s = 128 kB/s
CB/P = 12.8 kB/s of bandwidth per peer
B/S = CB/P/BS = 12.8 blocks/s from each peer
12.8 blocks/s * 0.1 s latency = 1.28 blocks in flight on the connection
*/ | |||
const ( | |||
requestIntervalMS = 100 | |||
maxTotalRequesters = 1000 | |||
maxPendingRequests = maxTotalRequesters | |||
maxPendingRequestsPerPeer = 50 | |||
// Minimum recv rate to ensure we're receiving blocks from a peer fast
// enough. If a peer is not sending us data at least at that rate, we
// consider them to have timed out and we disconnect.
//
// Assuming a DSL connection (not a good choice): 128 Kbps upload ~ 15 KB/s,
// and sending data across the Atlantic ~ 7.5 KB/s.
minRecvRate = 7680 | |||
// Maximum difference between current and new block's height. | |||
maxDiffBetweenCurrentAndReceivedBlockHeight = 100 | |||
) | |||
var peerTimeout = 15 * time.Second // not const so we can override with tests | |||
/* | |||
Peers self-report their heights when they join the block pool.
Starting from our latest pool.height, we request blocks
in sequence from peers that reported higher heights than ours.
Every so often we ask peers what height they're on so we can keep going.
Requests are continuously made for blocks of higher heights until
the limit is reached. If most of the requests have no available peers, and we
are not at peer limits, we can probably switch to the consensus reactor.
*/ | |||
type BlockPool struct { | |||
cmn.BaseService | |||
startTime time.Time | |||
mtx sync.Mutex | |||
// block requests | |||
requesters map[int64]*bpRequester | |||
height int64 // the lowest key in requesters. | |||
// peers | |||
peers map[p2p.ID]*bpPeer | |||
maxPeerHeight int64 | |||
// atomic | |||
numPending int32 // number of requests pending assignment or block response | |||
requestsCh chan<- BlockRequest | |||
errorsCh chan<- peerError | |||
} | |||
func NewBlockPool(start int64, requestsCh chan<- BlockRequest, errorsCh chan<- peerError) *BlockPool { | |||
bp := &BlockPool{ | |||
peers: make(map[p2p.ID]*bpPeer), | |||
requesters: make(map[int64]*bpRequester), | |||
height: start, | |||
numPending: 0, | |||
requestsCh: requestsCh, | |||
errorsCh: errorsCh, | |||
} | |||
bp.BaseService = *cmn.NewBaseService(nil, "BlockPool", bp) | |||
return bp | |||
} | |||
func (pool *BlockPool) OnStart() error { | |||
go pool.makeRequestersRoutine() | |||
pool.startTime = time.Now() | |||
return nil | |||
} | |||
func (pool *BlockPool) OnStop() {} | |||
// makeRequestersRoutine spawns requesters as needed.
func (pool *BlockPool) makeRequestersRoutine() { | |||
for { | |||
if !pool.IsRunning() { | |||
break | |||
} | |||
_, numPending, lenRequesters := pool.GetStatus() | |||
if numPending >= maxPendingRequests || lenRequesters >= maxTotalRequesters {
// sleep for a bit.
time.Sleep(requestIntervalMS * time.Millisecond)
// check for timed out peers
pool.removeTimedoutPeers()
} else {
// request for more blocks.
pool.makeNextRequester()
}
} | |||
} | |||
func (pool *BlockPool) removeTimedoutPeers() { | |||
pool.mtx.Lock() | |||
defer pool.mtx.Unlock() | |||
for _, peer := range pool.peers { | |||
if !peer.didTimeout && peer.numPending > 0 { | |||
curRate := peer.recvMonitor.Status().CurRate | |||
// curRate can be 0 on start | |||
if curRate != 0 && curRate < minRecvRate { | |||
err := errors.New("peer is not sending us data fast enough") | |||
pool.sendError(err, peer.id) | |||
pool.Logger.Error("SendTimeout", "peer", peer.id, | |||
"reason", err, | |||
"curRate", fmt.Sprintf("%d KB/s", curRate/1024), | |||
"minRate", fmt.Sprintf("%d KB/s", minRecvRate/1024)) | |||
peer.didTimeout = true | |||
} | |||
} | |||
if peer.didTimeout { | |||
pool.removePeer(peer.id) | |||
} | |||
} | |||
} | |||
func (pool *BlockPool) GetStatus() (height int64, numPending int32, lenRequesters int) { | |||
pool.mtx.Lock() | |||
defer pool.mtx.Unlock() | |||
return pool.height, atomic.LoadInt32(&pool.numPending), len(pool.requesters) | |||
} | |||
// TODO: relax conditions, prevent abuse. | |||
func (pool *BlockPool) IsCaughtUp() bool { | |||
pool.mtx.Lock() | |||
defer pool.mtx.Unlock() | |||
// Need at least 1 peer to be considered caught up. | |||
if len(pool.peers) == 0 { | |||
pool.Logger.Debug("Blockpool has no peers") | |||
return false | |||
} | |||
// We're caught up if we've received at least one block (or waited 5s since start)
// and our height is not behind the tallest height reported by a peer.
receivedBlockOrTimedOut := (pool.height > 0 || time.Since(pool.startTime) > 5*time.Second) | |||
ourChainIsLongestAmongPeers := pool.maxPeerHeight == 0 || pool.height >= pool.maxPeerHeight | |||
isCaughtUp := receivedBlockOrTimedOut && ourChainIsLongestAmongPeers | |||
return isCaughtUp | |||
} | |||
// We need to see the second block's Commit to validate the first block. | |||
// So we peek two blocks at a time. | |||
// The caller will verify the commit. | |||
func (pool *BlockPool) PeekTwoBlocks() (first *types.Block, second *types.Block) { | |||
pool.mtx.Lock() | |||
defer pool.mtx.Unlock() | |||
if r := pool.requesters[pool.height]; r != nil { | |||
first = r.getBlock() | |||
} | |||
if r := pool.requesters[pool.height+1]; r != nil { | |||
second = r.getBlock() | |||
} | |||
return | |||
} | |||
// PopRequest pops the first block at pool.height.
// It must have been validated by the second block's Commit from PeekTwoBlocks().
func (pool *BlockPool) PopRequest() { | |||
pool.mtx.Lock() | |||
defer pool.mtx.Unlock() | |||
if r := pool.requesters[pool.height]; r != nil { | |||
/* The block can disappear at any time, due to removePeer(). | |||
if r := pool.requesters[pool.height]; r == nil || r.block == nil { | |||
PanicSanity("PopRequest() requires a valid block") | |||
} | |||
*/ | |||
r.Stop() | |||
delete(pool.requesters, pool.height) | |||
pool.height++ | |||
} else { | |||
panic(fmt.Sprintf("Expected requester to pop, got nothing at height %v", pool.height)) | |||
} | |||
} | |||
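// Consumer sketch (illustrative): peek two blocks, verify the first against
// the second's commit, then pop, as the blockchain reactor's sync loop does:
//
//	first, second := pool.PeekTwoBlocks()
//	if first != nil && second != nil {
//		// verify first using second.LastCommit; on success:
//		pool.PopRequest()
//		// on failure: pool.RedoRequest(first.Height)
//	}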
// RedoRequest invalidates the block at the given height,
// removes the peer, and redoes the request from others.
// It returns the ID of the removed peer.
func (pool *BlockPool) RedoRequest(height int64) p2p.ID { | |||
pool.mtx.Lock() | |||
defer pool.mtx.Unlock() | |||
request := pool.requesters[height] | |||
if request.block == nil { | |||
panic("Expected block to be non-nil") | |||
} | |||
// RemovePeer will redo all requesters associated with this peer. | |||
pool.removePeer(request.peerID) | |||
return request.peerID | |||
} | |||
// TODO: ensure that blocks come in order for each peer. | |||
func (pool *BlockPool) AddBlock(peerID p2p.ID, block *types.Block, blockSize int) { | |||
pool.mtx.Lock() | |||
defer pool.mtx.Unlock() | |||
requester := pool.requesters[block.Height] | |||
if requester == nil { | |||
pool.Logger.Info("peer sent us a block we didn't expect", "peer", peerID, "curHeight", pool.height, "blockHeight", block.Height) | |||
diff := pool.height - block.Height | |||
if diff < 0 { | |||
diff *= -1 | |||
} | |||
if diff > maxDiffBetweenCurrentAndReceivedBlockHeight { | |||
pool.sendError(errors.New("peer sent us a block we didn't expect with a height too far ahead/behind"), peerID) | |||
} | |||
return | |||
} | |||
if requester.setBlock(block, peerID) { | |||
atomic.AddInt32(&pool.numPending, -1) | |||
peer := pool.peers[peerID] | |||
if peer != nil { | |||
peer.decrPending(blockSize) | |||
} | |||
} else { | |||
// Bad peer? | |||
} | |||
} | |||
// MaxPeerHeight returns the highest height reported by a peer. | |||
func (pool *BlockPool) MaxPeerHeight() int64 { | |||
pool.mtx.Lock() | |||
defer pool.mtx.Unlock() | |||
return pool.maxPeerHeight | |||
} | |||
// SetPeerHeight sets the peer's alleged blockchain height.
func (pool *BlockPool) SetPeerHeight(peerID p2p.ID, height int64) { | |||
pool.mtx.Lock() | |||
defer pool.mtx.Unlock() | |||
peer := pool.peers[peerID] | |||
if peer != nil { | |||
peer.height = height | |||
} else { | |||
peer = newBPPeer(pool, peerID, height) | |||
peer.setLogger(pool.Logger.With("peer", peerID)) | |||
pool.peers[peerID] = peer | |||
} | |||
if height > pool.maxPeerHeight { | |||
pool.maxPeerHeight = height | |||
} | |||
} | |||
func (pool *BlockPool) RemovePeer(peerID p2p.ID) { | |||
pool.mtx.Lock() | |||
defer pool.mtx.Unlock() | |||
pool.removePeer(peerID) | |||
} | |||
func (pool *BlockPool) removePeer(peerID p2p.ID) { | |||
for _, requester := range pool.requesters { | |||
if requester.getPeerID() == peerID { | |||
requester.redo() | |||
} | |||
} | |||
delete(pool.peers, peerID) | |||
} | |||
// pickIncrAvailablePeer picks an available peer with height of at least
// minHeight and increments its pending count; timed-out peers are removed
// along the way. If no peer qualifies, it returns nil.
func (pool *BlockPool) pickIncrAvailablePeer(minHeight int64) *bpPeer { | |||
pool.mtx.Lock() | |||
defer pool.mtx.Unlock() | |||
for _, peer := range pool.peers { | |||
if peer.didTimeout { | |||
pool.removePeer(peer.id) | |||
continue | |||
} | |||
if peer.numPending >= maxPendingRequestsPerPeer { | |||
continue | |||
} | |||
if peer.height < minHeight { | |||
continue | |||
} | |||
peer.incrPending() | |||
return peer | |||
} | |||
return nil | |||
} | |||
func (pool *BlockPool) makeNextRequester() { | |||
pool.mtx.Lock() | |||
defer pool.mtx.Unlock() | |||
nextHeight := pool.height + pool.requestersLen() | |||
request := newBPRequester(pool, nextHeight) | |||
// request.SetLogger(pool.Logger.With("height", nextHeight)) | |||
pool.requesters[nextHeight] = request | |||
atomic.AddInt32(&pool.numPending, 1) | |||
err := request.Start() | |||
if err != nil { | |||
request.Logger.Error("Error starting request", "err", err) | |||
} | |||
} | |||
func (pool *BlockPool) requestersLen() int64 { | |||
return int64(len(pool.requesters)) | |||
} | |||
func (pool *BlockPool) sendRequest(height int64, peerID p2p.ID) { | |||
if !pool.IsRunning() { | |||
return | |||
} | |||
pool.requestsCh <- BlockRequest{height, peerID} | |||
} | |||
func (pool *BlockPool) sendError(err error, peerID p2p.ID) { | |||
if !pool.IsRunning() { | |||
return | |||
} | |||
pool.errorsCh <- peerError{err, peerID} | |||
} | |||
// unused by tendermint; left for debugging purposes | |||
func (pool *BlockPool) debug() string { | |||
pool.mtx.Lock() | |||
defer pool.mtx.Unlock() | |||
str := "" | |||
nextHeight := pool.height + pool.requestersLen() | |||
for h := pool.height; h < nextHeight; h++ { | |||
if pool.requesters[h] == nil { | |||
str += cmn.Fmt("H(%v):X ", h) | |||
} else { | |||
str += cmn.Fmt("H(%v):", h) | |||
str += cmn.Fmt("B?(%v) ", pool.requesters[h].block != nil) | |||
} | |||
} | |||
return str | |||
} | |||
//------------------------------------- | |||
type bpPeer struct { | |||
pool *BlockPool | |||
id p2p.ID | |||
recvMonitor *flow.Monitor | |||
height int64 | |||
numPending int32 | |||
timeout *time.Timer | |||
didTimeout bool | |||
logger log.Logger | |||
} | |||
func newBPPeer(pool *BlockPool, peerID p2p.ID, height int64) *bpPeer { | |||
peer := &bpPeer{ | |||
pool: pool, | |||
id: peerID, | |||
height: height, | |||
numPending: 0, | |||
logger: log.NewNopLogger(), | |||
} | |||
return peer | |||
} | |||
func (peer *bpPeer) setLogger(l log.Logger) { | |||
peer.logger = l | |||
} | |||
func (peer *bpPeer) resetMonitor() { | |||
peer.recvMonitor = flow.New(time.Second, time.Second*40) | |||
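// Seeding the rate EMA above minRecvRate (the factor e is headroom) likely
// prevents a fresh peer from being flagged as too slow before real samples arrive.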
initialValue := float64(minRecvRate) * math.E | |||
peer.recvMonitor.SetREMA(initialValue) | |||
} | |||
func (peer *bpPeer) resetTimeout() { | |||
if peer.timeout == nil { | |||
peer.timeout = time.AfterFunc(peerTimeout, peer.onTimeout) | |||
} else { | |||
peer.timeout.Reset(peerTimeout) | |||
} | |||
} | |||
func (peer *bpPeer) incrPending() { | |||
if peer.numPending == 0 { | |||
peer.resetMonitor() | |||
peer.resetTimeout() | |||
} | |||
peer.numPending++ | |||
} | |||
func (peer *bpPeer) decrPending(recvSize int) { | |||
peer.numPending-- | |||
if peer.numPending == 0 { | |||
peer.timeout.Stop() | |||
} else { | |||
peer.recvMonitor.Update(recvSize) | |||
peer.resetTimeout() | |||
} | |||
} | |||
func (peer *bpPeer) onTimeout() { | |||
peer.pool.mtx.Lock() | |||
defer peer.pool.mtx.Unlock() | |||
err := errors.New("peer did not send us anything") | |||
peer.pool.sendError(err, peer.id) | |||
peer.logger.Error("SendTimeout", "reason", err, "timeout", peerTimeout) | |||
peer.didTimeout = true | |||
} | |||
//------------------------------------- | |||
type bpRequester struct { | |||
cmn.BaseService | |||
pool *BlockPool | |||
height int64 | |||
gotBlockCh chan struct{} | |||
redoCh chan struct{} | |||
mtx sync.Mutex | |||
peerID p2p.ID | |||
block *types.Block | |||
} | |||
func newBPRequester(pool *BlockPool, height int64) *bpRequester { | |||
bpr := &bpRequester{ | |||
pool: pool, | |||
height: height, | |||
gotBlockCh: make(chan struct{}, 1), | |||
redoCh: make(chan struct{}, 1), | |||
peerID: "", | |||
block: nil, | |||
} | |||
bpr.BaseService = *cmn.NewBaseService(nil, "bpRequester", bpr) | |||
return bpr | |||
} | |||
func (bpr *bpRequester) OnStart() error { | |||
go bpr.requestRoutine() | |||
return nil | |||
} | |||
// Returns true if the peer matches and block doesn't already exist. | |||
func (bpr *bpRequester) setBlock(block *types.Block, peerID p2p.ID) bool { | |||
bpr.mtx.Lock() | |||
if bpr.block != nil || bpr.peerID != peerID { | |||
bpr.mtx.Unlock() | |||
return false | |||
} | |||
bpr.block = block | |||
bpr.mtx.Unlock() | |||
select { | |||
case bpr.gotBlockCh <- struct{}{}: | |||
default: | |||
} | |||
return true | |||
} | |||
func (bpr *bpRequester) getBlock() *types.Block { | |||
bpr.mtx.Lock() | |||
defer bpr.mtx.Unlock() | |||
return bpr.block | |||
} | |||
func (bpr *bpRequester) getPeerID() p2p.ID { | |||
bpr.mtx.Lock() | |||
defer bpr.mtx.Unlock() | |||
return bpr.peerID | |||
} | |||
// This is called from the requestRoutine, upon redo(). | |||
func (bpr *bpRequester) reset() { | |||
bpr.mtx.Lock() | |||
defer bpr.mtx.Unlock() | |||
if bpr.block != nil { | |||
atomic.AddInt32(&bpr.pool.numPending, 1) | |||
} | |||
bpr.peerID = "" | |||
bpr.block = nil | |||
} | |||
// Tells bpRequester to pick another peer and try again. | |||
// NOTE: Nonblocking, and does nothing if another redo | |||
// was already requested. | |||
func (bpr *bpRequester) redo() { | |||
select { | |||
case bpr.redoCh <- struct{}{}: | |||
default: | |||
} | |||
} | |||
// requestRoutine picks a peer, sends a block request, and waits for the
// block (or a redo, in which case it resets and picks another peer).
// It returns only when the requester or the pool is stopped.
func (bpr *bpRequester) requestRoutine() { | |||
OUTER_LOOP: | |||
for { | |||
// Pick a peer to send request to. | |||
var peer *bpPeer | |||
PICK_PEER_LOOP: | |||
for { | |||
if !bpr.IsRunning() || !bpr.pool.IsRunning() { | |||
return | |||
} | |||
peer = bpr.pool.pickIncrAvailablePeer(bpr.height) | |||
if peer == nil { | |||
//log.Info("No peers available", "height", height) | |||
time.Sleep(requestIntervalMS * time.Millisecond) | |||
continue PICK_PEER_LOOP | |||
} | |||
break PICK_PEER_LOOP | |||
} | |||
bpr.mtx.Lock() | |||
bpr.peerID = peer.id | |||
bpr.mtx.Unlock() | |||
// Send request and wait. | |||
bpr.pool.sendRequest(bpr.height, peer.id) | |||
WAIT_LOOP: | |||
for { | |||
select { | |||
case <-bpr.pool.Quit(): | |||
bpr.Stop() | |||
return | |||
case <-bpr.Quit(): | |||
return | |||
case <-bpr.redoCh: | |||
bpr.reset() | |||
continue OUTER_LOOP | |||
case <-bpr.gotBlockCh: | |||
// We got a block!
// Continue the for-loop and wait until Quit.
continue WAIT_LOOP | |||
} | |||
} | |||
} | |||
} | |||
//------------------------------------- | |||
type BlockRequest struct { | |||
Height int64 | |||
PeerID p2p.ID | |||
} |
@@ -1,148 +0,0 @@
package blockchain | |||
import ( | |||
"math/rand" | |||
"testing" | |||
"time" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
"github.com/tendermint/tmlibs/log" | |||
"github.com/tendermint/tendermint/p2p" | |||
"github.com/tendermint/tendermint/types" | |||
) | |||
func init() { | |||
peerTimeout = 2 * time.Second | |||
} | |||
type testPeer struct { | |||
id p2p.ID | |||
height int64 | |||
} | |||
func makePeers(numPeers int, minHeight, maxHeight int64) map[p2p.ID]testPeer { | |||
peers := make(map[p2p.ID]testPeer, numPeers) | |||
for i := 0; i < numPeers; i++ { | |||
peerID := p2p.ID(cmn.RandStr(12)) | |||
height := minHeight + rand.Int63n(maxHeight-minHeight) | |||
peers[peerID] = testPeer{peerID, height} | |||
} | |||
return peers | |||
} | |||
func TestBasic(t *testing.T) { | |||
start := int64(42) | |||
peers := makePeers(10, start+1, 1000) | |||
errorsCh := make(chan peerError, 1000) | |||
requestsCh := make(chan BlockRequest, 1000) | |||
pool := NewBlockPool(start, requestsCh, errorsCh) | |||
pool.SetLogger(log.TestingLogger()) | |||
err := pool.Start() | |||
if err != nil { | |||
t.Error(err) | |||
} | |||
defer pool.Stop() | |||
// Introduce each peer. | |||
go func() { | |||
for _, peer := range peers { | |||
pool.SetPeerHeight(peer.id, peer.height) | |||
} | |||
}() | |||
// Start a goroutine to pull blocks | |||
go func() { | |||
for { | |||
if !pool.IsRunning() { | |||
return | |||
} | |||
first, second := pool.PeekTwoBlocks() | |||
if first != nil && second != nil { | |||
pool.PopRequest() | |||
} else { | |||
time.Sleep(1 * time.Second) | |||
} | |||
} | |||
}() | |||
// Pull from channels | |||
for { | |||
select { | |||
case err := <-errorsCh: | |||
t.Error(err) | |||
case request := <-requestsCh: | |||
t.Logf("Pulled new BlockRequest %v", request) | |||
if request.Height == 300 { | |||
return // Done! | |||
} | |||
// The request is expected; pretend we got the block immediately.
go func() { | |||
block := &types.Block{Header: &types.Header{Height: request.Height}} | |||
pool.AddBlock(request.PeerID, block, 123) | |||
t.Logf("Added block from peer %v (height: %v)", request.PeerID, request.Height) | |||
}() | |||
} | |||
} | |||
} | |||
func TestTimeout(t *testing.T) { | |||
start := int64(42) | |||
peers := makePeers(10, start+1, 1000) | |||
errorsCh := make(chan peerError, 1000) | |||
requestsCh := make(chan BlockRequest, 1000) | |||
pool := NewBlockPool(start, requestsCh, errorsCh) | |||
pool.SetLogger(log.TestingLogger()) | |||
err := pool.Start() | |||
if err != nil { | |||
t.Error(err) | |||
} | |||
defer pool.Stop() | |||
for _, peer := range peers { | |||
t.Logf("Peer %v", peer.id) | |||
} | |||
// Introduce each peer. | |||
go func() { | |||
for _, peer := range peers { | |||
pool.SetPeerHeight(peer.id, peer.height) | |||
} | |||
}() | |||
// Start a goroutine to pull blocks | |||
go func() { | |||
for { | |||
if !pool.IsRunning() { | |||
return | |||
} | |||
first, second := pool.PeekTwoBlocks() | |||
if first != nil && second != nil { | |||
pool.PopRequest() | |||
} else { | |||
time.Sleep(1 * time.Second) | |||
} | |||
} | |||
}() | |||
// Pull from channels | |||
counter := 0 | |||
timedOut := map[p2p.ID]struct{}{} | |||
for { | |||
select { | |||
case err := <-errorsCh: | |||
t.Log(err) | |||
// every error is treated as a timeout here
if _, ok := timedOut[err.peerID]; !ok { | |||
counter++ | |||
if counter == len(peers) { | |||
return // Done! | |||
} | |||
} | |||
case request := <-requestsCh: | |||
t.Logf("Pulled new BlockRequest %+v", request) | |||
} | |||
} | |||
} |
@@ -1,405 +0,0 @@
package blockchain | |||
import ( | |||
"fmt" | |||
"reflect" | |||
"time" | |||
"github.com/tendermint/go-amino" | |||
"github.com/tendermint/tendermint/p2p" | |||
sm "github.com/tendermint/tendermint/state" | |||
"github.com/tendermint/tendermint/types" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
"github.com/tendermint/tmlibs/log" | |||
) | |||
const ( | |||
// BlockchainChannel is a channel for blocks and status updates (`BlockStore` height) | |||
BlockchainChannel = byte(0x40) | |||
trySyncIntervalMS = 50 | |||
// stop syncing when last block's time is | |||
// within this much of the system time. | |||
// stopSyncingDurationMinutes = 10 | |||
// ask for best height every 10s | |||
statusUpdateIntervalSeconds = 10 | |||
// check if we should switch to consensus reactor | |||
switchToConsensusIntervalSeconds = 1 | |||
// NOTE: keep up to date with bcBlockResponseMessage | |||
bcBlockResponseMessagePrefixSize = 4 | |||
bcBlockResponseMessageFieldKeySize = 1 | |||
maxMsgSize = types.MaxBlockSizeBytes + | |||
bcBlockResponseMessagePrefixSize + | |||
bcBlockResponseMessageFieldKeySize | |||
) | |||
type consensusReactor interface { | |||
// for when we switch from blockchain reactor and fast sync to | |||
// the consensus machine | |||
SwitchToConsensus(sm.State, int) | |||
} | |||
type peerError struct { | |||
err error | |||
peerID p2p.ID | |||
} | |||
func (e peerError) Error() string { | |||
return fmt.Sprintf("error with peer %v: %s", e.peerID, e.err.Error()) | |||
} | |||
// BlockchainReactor handles long-term catchup syncing. | |||
type BlockchainReactor struct { | |||
p2p.BaseReactor | |||
// immutable | |||
initialState sm.State | |||
blockExec *sm.BlockExecutor | |||
store *BlockStore | |||
pool *BlockPool | |||
fastSync bool | |||
requestsCh <-chan BlockRequest | |||
errorsCh <-chan peerError | |||
} | |||
// NewBlockchainReactor returns new reactor instance. | |||
func NewBlockchainReactor(state sm.State, blockExec *sm.BlockExecutor, store *BlockStore, | |||
fastSync bool) *BlockchainReactor { | |||
if state.LastBlockHeight != store.Height() { | |||
panic(fmt.Sprintf("state (%v) and store (%v) height mismatch", state.LastBlockHeight, | |||
store.Height())) | |||
} | |||
const capacity = 1000 // must be bigger than peers count | |||
requestsCh := make(chan BlockRequest, capacity) | |||
errorsCh := make(chan peerError, capacity) // so we don't block in #Receive#pool.AddBlock | |||
pool := NewBlockPool( | |||
store.Height()+1, | |||
requestsCh, | |||
errorsCh, | |||
) | |||
bcR := &BlockchainReactor{ | |||
initialState: state, | |||
blockExec: blockExec, | |||
store: store, | |||
pool: pool, | |||
fastSync: fastSync, | |||
requestsCh: requestsCh, | |||
errorsCh: errorsCh, | |||
} | |||
bcR.BaseReactor = *p2p.NewBaseReactor("BlockchainReactor", bcR) | |||
return bcR | |||
} | |||
// SetLogger implements cmn.Service by setting the logger on reactor and pool. | |||
func (bcR *BlockchainReactor) SetLogger(l log.Logger) { | |||
bcR.BaseService.Logger = l | |||
bcR.pool.Logger = l | |||
} | |||
// OnStart implements cmn.Service. | |||
func (bcR *BlockchainReactor) OnStart() error { | |||
if err := bcR.BaseReactor.OnStart(); err != nil { | |||
return err | |||
} | |||
if bcR.fastSync { | |||
err := bcR.pool.Start() | |||
if err != nil { | |||
return err | |||
} | |||
go bcR.poolRoutine() | |||
} | |||
return nil | |||
} | |||
// OnStop implements cmn.Service. | |||
func (bcR *BlockchainReactor) OnStop() { | |||
bcR.BaseReactor.OnStop() | |||
bcR.pool.Stop() | |||
} | |||
// GetChannels implements Reactor | |||
func (bcR *BlockchainReactor) GetChannels() []*p2p.ChannelDescriptor { | |||
return []*p2p.ChannelDescriptor{ | |||
{ | |||
ID: BlockchainChannel, | |||
Priority: 10, | |||
SendQueueCapacity: 1000, | |||
RecvBufferCapacity: 50 * 4096, | |||
RecvMessageCapacity: maxMsgSize, | |||
}, | |||
} | |||
} | |||
// AddPeer implements Reactor by sending our state to peer. | |||
func (bcR *BlockchainReactor) AddPeer(peer p2p.Peer) { | |||
msgBytes := cdc.MustMarshalBinaryBare(&bcStatusResponseMessage{bcR.store.Height()}) | |||
if !peer.Send(BlockchainChannel, msgBytes) { | |||
// doing nothing, will try later in `poolRoutine` | |||
} | |||
// peer is added to the pool once we receive the first | |||
// bcStatusResponseMessage from the peer and call pool.SetPeerHeight | |||
} | |||
// RemovePeer implements Reactor by removing peer from the pool. | |||
func (bcR *BlockchainReactor) RemovePeer(peer p2p.Peer, reason interface{}) { | |||
bcR.pool.RemovePeer(peer.ID()) | |||
} | |||
// respondToPeer loads a block and sends it to the requesting peer,
// if we have it. Otherwise, we'll respond saying we don't have it.
// According to the Tendermint spec, if all nodes are honest,
// no node should request a block that doesn't exist.
func (bcR *BlockchainReactor) respondToPeer(msg *bcBlockRequestMessage, | |||
src p2p.Peer) (queued bool) { | |||
block := bcR.store.LoadBlock(msg.Height) | |||
if block != nil { | |||
msgBytes := cdc.MustMarshalBinaryBare(&bcBlockResponseMessage{Block: block}) | |||
return src.TrySend(BlockchainChannel, msgBytes) | |||
} | |||
bcR.Logger.Info("Peer asking for a block we don't have", "src", src, "height", msg.Height) | |||
msgBytes := cdc.MustMarshalBinaryBare(&bcNoBlockResponseMessage{Height: msg.Height}) | |||
return src.TrySend(BlockchainChannel, msgBytes) | |||
} | |||
// Receive implements Reactor by handling 4 types of messages (see below).
func (bcR *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) { | |||
msg, err := DecodeMessage(msgBytes) | |||
if err != nil { | |||
bcR.Logger.Error("Error decoding message", "src", src, "chId", chID, "msg", msg, "err", err, "bytes", msgBytes) | |||
bcR.Switch.StopPeerForError(src, err) | |||
return | |||
} | |||
bcR.Logger.Debug("Receive", "src", src, "chID", chID, "msg", msg) | |||
switch msg := msg.(type) { | |||
case *bcBlockRequestMessage: | |||
if queued := bcR.respondToPeer(msg, src); !queued { | |||
// Unfortunately not queued since the queue is full. | |||
} | |||
case *bcBlockResponseMessage: | |||
// Got a block. | |||
bcR.pool.AddBlock(src.ID(), msg.Block, len(msgBytes)) | |||
case *bcStatusRequestMessage: | |||
// Send peer our state. | |||
msgBytes := cdc.MustMarshalBinaryBare(&bcStatusResponseMessage{bcR.store.Height()}) | |||
queued := src.TrySend(BlockchainChannel, msgBytes) | |||
if !queued { | |||
// sorry | |||
} | |||
case *bcStatusResponseMessage: | |||
// Got a peer status. Unverified. | |||
bcR.pool.SetPeerHeight(src.ID(), msg.Height) | |||
default: | |||
bcR.Logger.Error(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg))) | |||
} | |||
} | |||
// Handle messages from the poolReactor telling the reactor what to do. | |||
// NOTE: Don't sleep in the FOR_LOOP or otherwise slow it down! | |||
// (Except for the SYNC_LOOP, which is the primary purpose and must be synchronous.) | |||
func (bcR *BlockchainReactor) poolRoutine() { | |||
trySyncTicker := time.NewTicker(trySyncIntervalMS * time.Millisecond) | |||
statusUpdateTicker := time.NewTicker(statusUpdateIntervalSeconds * time.Second) | |||
switchToConsensusTicker := time.NewTicker(switchToConsensusIntervalSeconds * time.Second) | |||
blocksSynced := 0 | |||
chainID := bcR.initialState.ChainID | |||
state := bcR.initialState | |||
lastHundred := time.Now() | |||
lastRate := 0.0 | |||
FOR_LOOP: | |||
for { | |||
select { | |||
case request := <-bcR.requestsCh: | |||
peer := bcR.Switch.Peers().Get(request.PeerID) | |||
if peer == nil { | |||
continue FOR_LOOP // Peer has since been disconnected. | |||
} | |||
msgBytes := cdc.MustMarshalBinaryBare(&bcBlockRequestMessage{request.Height}) | |||
queued := peer.TrySend(BlockchainChannel, msgBytes) | |||
if !queued { | |||
// We couldn't make the request, send-queue full. | |||
// The pool handles timeouts, just let it go. | |||
continue FOR_LOOP | |||
} | |||
case err := <-bcR.errorsCh: | |||
peer := bcR.Switch.Peers().Get(err.peerID) | |||
if peer != nil { | |||
bcR.Switch.StopPeerForError(peer, err) | |||
} | |||
case <-statusUpdateTicker.C: | |||
// ask for status updates | |||
go bcR.BroadcastStatusRequest() // nolint: errcheck | |||
case <-switchToConsensusTicker.C: | |||
height, numPending, lenRequesters := bcR.pool.GetStatus() | |||
outbound, inbound, _ := bcR.Switch.NumPeers() | |||
bcR.Logger.Debug("Consensus ticker", "numPending", numPending, "total", lenRequesters, | |||
"outbound", outbound, "inbound", inbound) | |||
if bcR.pool.IsCaughtUp() { | |||
bcR.Logger.Info("Time to switch to consensus reactor!", "height", height) | |||
bcR.pool.Stop() | |||
conR := bcR.Switch.Reactor("CONSENSUS").(consensusReactor) | |||
conR.SwitchToConsensus(state, blocksSynced) | |||
break FOR_LOOP | |||
} | |||
case <-trySyncTicker.C: // chan time | |||
// This loop can be slow as long as it's doing syncing work. | |||
SYNC_LOOP: | |||
for i := 0; i < 10; i++ { | |||
// See if there are any blocks to sync. | |||
first, second := bcR.pool.PeekTwoBlocks() | |||
//bcR.Logger.Info("TrySync peeked", "first", first, "second", second) | |||
if first == nil || second == nil { | |||
// We need both to sync the first block. | |||
break SYNC_LOOP | |||
} | |||
firstParts := first.MakePartSet(state.ConsensusParams.BlockPartSizeBytes) | |||
firstPartsHeader := firstParts.Header() | |||
firstID := types.BlockID{first.Hash(), firstPartsHeader} | |||
// Finally, verify the first block using the second's commit | |||
// NOTE: we can probably make this more efficient, but note that calling | |||
// first.Hash() doesn't verify the tx contents, so MakePartSet() is | |||
// currently necessary. | |||
err := state.Validators.VerifyCommit( | |||
chainID, firstID, first.Height, second.LastCommit) | |||
if err != nil { | |||
bcR.Logger.Error("Error in validation", "err", err) | |||
peerID := bcR.pool.RedoRequest(first.Height) | |||
peer := bcR.Switch.Peers().Get(peerID) | |||
if peer != nil { | |||
bcR.Switch.StopPeerForError(peer, fmt.Errorf("BlockchainReactor validation error: %v", err)) | |||
} | |||
break SYNC_LOOP | |||
} else { | |||
bcR.pool.PopRequest() | |||
// TODO: batch saves so we don't persist to disk every block
bcR.store.SaveBlock(first, firstParts, second.LastCommit) | |||
// TODO: same thing for app - but we would need a way to | |||
// get the hash without persisting the state | |||
var err error | |||
state, err = bcR.blockExec.ApplyBlock(state, firstID, first) | |||
if err != nil { | |||
// TODO: This is bad. Are we a zombie?
cmn.PanicQ(cmn.Fmt("Failed to process committed block (%d:%X): %v", | |||
first.Height, first.Hash(), err)) | |||
} | |||
blocksSynced++ | |||
if blocksSynced%100 == 0 { | |||
lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds()) | |||
bcR.Logger.Info("Fast Sync Rate", "height", bcR.pool.height, | |||
"max_peer_height", bcR.pool.MaxPeerHeight(), "blocks/s", lastRate) | |||
lastHundred = time.Now() | |||
} | |||
} | |||
} | |||
continue FOR_LOOP | |||
case <-bcR.Quit(): | |||
break FOR_LOOP | |||
} | |||
} | |||
} | |||
// BroadcastStatusRequest broadcasts `BlockStore` height. | |||
func (bcR *BlockchainReactor) BroadcastStatusRequest() error { | |||
msgBytes := cdc.MustMarshalBinaryBare(&bcStatusRequestMessage{bcR.store.Height()}) | |||
bcR.Switch.Broadcast(BlockchainChannel, msgBytes) | |||
return nil | |||
} | |||
//----------------------------------------------------------------------------- | |||
// Messages | |||
// BlockchainMessage is a generic message for this reactor. | |||
type BlockchainMessage interface{} | |||
func RegisterBlockchainMessages(cdc *amino.Codec) { | |||
cdc.RegisterInterface((*BlockchainMessage)(nil), nil) | |||
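// NOTE: the "tendermint/mempool/..." route names below look like a
// copy-paste holdover from the mempool reactor; they are kept as-is,
// since renaming registered types would break wire compatibility.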
cdc.RegisterConcrete(&bcBlockRequestMessage{}, "tendermint/mempool/BlockRequest", nil) | |||
cdc.RegisterConcrete(&bcBlockResponseMessage{}, "tendermint/mempool/BlockResponse", nil) | |||
cdc.RegisterConcrete(&bcNoBlockResponseMessage{}, "tendermint/mempool/NoBlockResponse", nil) | |||
cdc.RegisterConcrete(&bcStatusResponseMessage{}, "tendermint/mempool/StatusResponse", nil) | |||
cdc.RegisterConcrete(&bcStatusRequestMessage{}, "tendermint/mempool/StatusRequest", nil) | |||
} | |||
// DecodeMessage decodes BlockchainMessage. | |||
// TODO: ensure that bz is completely read. | |||
func DecodeMessage(bz []byte) (msg BlockchainMessage, err error) { | |||
if len(bz) > maxMsgSize { | |||
return msg, fmt.Errorf("Msg exceeds max size (%d > %d)", | |||
len(bz), maxMsgSize) | |||
} | |||
err = cdc.UnmarshalBinaryBare(bz, &msg) | |||
if err != nil { | |||
err = cmn.ErrorWrap(err, "DecodeMessage() failed")
} | |||
return | |||
} | |||
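// exampleDecodeRoundTrip is an illustrative sketch (not used by the reactor):
// DecodeMessage inverts the codec's MustMarshalBinaryBare for registered types.
func exampleDecodeRoundTrip() (BlockchainMessage, error) {
bz := cdc.MustMarshalBinaryBare(&bcStatusRequestMessage{Height: 1})
return DecodeMessage(bz)
}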
//------------------------------------- | |||
type bcBlockRequestMessage struct { | |||
Height int64 | |||
} | |||
func (m *bcBlockRequestMessage) String() string { | |||
return cmn.Fmt("[bcBlockRequestMessage %v]", m.Height) | |||
} | |||
type bcNoBlockResponseMessage struct { | |||
Height int64 | |||
} | |||
func (brm *bcNoBlockResponseMessage) String() string { | |||
return cmn.Fmt("[bcNoBlockResponseMessage %d]", brm.Height) | |||
} | |||
//------------------------------------- | |||
type bcBlockResponseMessage struct { | |||
Block *types.Block | |||
} | |||
func (m *bcBlockResponseMessage) String() string { | |||
return cmn.Fmt("[bcBlockResponseMessage %v]", m.Block.Height) | |||
} | |||
//------------------------------------- | |||
type bcStatusRequestMessage struct { | |||
Height int64 | |||
} | |||
func (m *bcStatusRequestMessage) String() string { | |||
return cmn.Fmt("[bcStatusRequestMessage %v]", m.Height) | |||
} | |||
//------------------------------------- | |||
type bcStatusResponseMessage struct { | |||
Height int64 | |||
} | |||
func (m *bcStatusResponseMessage) String() string { | |||
return cmn.Fmt("[bcStatusResponseMessage %v]", m.Height) | |||
} |
@@ -1,208 +0,0 @@
package blockchain | |||
import ( | |||
"net" | |||
"testing" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
dbm "github.com/tendermint/tmlibs/db" | |||
"github.com/tendermint/tmlibs/log" | |||
cfg "github.com/tendermint/tendermint/config" | |||
"github.com/tendermint/tendermint/p2p" | |||
"github.com/tendermint/tendermint/proxy" | |||
sm "github.com/tendermint/tendermint/state" | |||
"github.com/tendermint/tendermint/types" | |||
) | |||
func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore) { | |||
config := cfg.ResetTestRoot("blockchain_reactor_test") | |||
// blockDB := dbm.NewDebugDB("blockDB", dbm.NewMemDB()) | |||
// stateDB := dbm.NewDebugDB("stateDB", dbm.NewMemDB()) | |||
blockDB := dbm.NewMemDB() | |||
stateDB := dbm.NewMemDB() | |||
blockStore := NewBlockStore(blockDB) | |||
state, err := sm.LoadStateFromDBOrGenesisFile(stateDB, config.GenesisFile()) | |||
if err != nil { | |||
panic(cmn.ErrorWrap(err, "error constructing state from genesis file")) | |||
} | |||
return state, blockStore | |||
} | |||
func newBlockchainReactor(logger log.Logger, maxBlockHeight int64) *BlockchainReactor { | |||
state, blockStore := makeStateAndBlockStore(logger) | |||
// Make the blockchainReactor itself | |||
fastSync := true | |||
var nilApp proxy.AppConnConsensus | |||
blockExec := sm.NewBlockExecutor(dbm.NewMemDB(), log.TestingLogger(), nilApp, | |||
sm.MockMempool{}, sm.MockEvidencePool{}) | |||
bcReactor := NewBlockchainReactor(state.Copy(), blockExec, blockStore, fastSync) | |||
bcReactor.SetLogger(logger.With("module", "blockchain")) | |||
// Next: set a switch so that peers can be added.
bcReactor.Switch = p2p.NewSwitch(cfg.DefaultP2PConfig()) | |||
// Lastly: seed the store with some blocks.
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ { | |||
firstBlock := makeBlock(blockHeight, state) | |||
secondBlock := makeBlock(blockHeight+1, state) | |||
firstParts := firstBlock.MakePartSet(state.ConsensusParams.BlockGossip.BlockPartSizeBytes) | |||
blockStore.SaveBlock(firstBlock, firstParts, secondBlock.LastCommit) | |||
} | |||
return bcReactor | |||
} | |||
func TestNoBlockResponse(t *testing.T) { | |||
maxBlockHeight := int64(20) | |||
bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight) | |||
bcr.Start() | |||
defer bcr.Stop() | |||
// Add some peers in | |||
peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12))) | |||
bcr.AddPeer(peer) | |||
chID := byte(0x01) | |||
tests := []struct { | |||
height int64 | |||
existent bool | |||
}{ | |||
{maxBlockHeight + 2, false}, | |||
{10, true}, | |||
{1, true}, | |||
{100, false}, | |||
} | |||
// receive a request message from peer, | |||
// wait for our response to be received on the peer | |||
for _, tt := range tests { | |||
reqBlockMsg := &bcBlockRequestMessage{tt.height} | |||
reqBlockBytes := cdc.MustMarshalBinaryBare(reqBlockMsg) | |||
bcr.Receive(chID, peer, reqBlockBytes) | |||
msg := peer.lastBlockchainMessage() | |||
if tt.existent { | |||
if blockMsg, ok := msg.(*bcBlockResponseMessage); !ok { | |||
t.Fatalf("Expected to receive a block response for height %d", tt.height) | |||
} else if blockMsg.Block.Height != tt.height { | |||
t.Fatalf("Expected response to be for height %d, got %d", tt.height, blockMsg.Block.Height) | |||
} | |||
} else { | |||
if noBlockMsg, ok := msg.(*bcNoBlockResponseMessage); !ok { | |||
t.Fatalf("Expected to receive a no block response for height %d", tt.height) | |||
} else if noBlockMsg.Height != tt.height { | |||
t.Fatalf("Expected response to be for height %d, got %d", tt.height, noBlockMsg.Height) | |||
} | |||
} | |||
} | |||
} | |||
/* | |||
// NOTE: This is too hard to test without | |||
// an easy way to add test peer to switch | |||
// or without significant refactoring of the module. | |||
// Alternatively we could actually dial a TCP conn but | |||
// that seems extreme. | |||
func TestBadBlockStopsPeer(t *testing.T) { | |||
maxBlockHeight := int64(20) | |||
bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight) | |||
bcr.Start() | |||
defer bcr.Stop() | |||
// Add some peers in | |||
peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12))) | |||
// XXX: This doesn't add the peer to anything, | |||
// so it's hard to check that it's later removed | |||
bcr.AddPeer(peer) | |||
assert.True(t, bcr.Switch.Peers().Size() > 0) | |||
// send a bad block from the peer | |||
// default blocks already don't have commits, so this should fail
block := bcr.store.LoadBlock(3) | |||
msg := &bcBlockResponseMessage{Block: block} | |||
peer.Send(BlockchainChannel, struct{ BlockchainMessage }{msg}) | |||
ticker := time.NewTicker(time.Millisecond * 10) | |||
timer := time.NewTimer(time.Second * 2) | |||
LOOP: | |||
for { | |||
select { | |||
case <-ticker.C: | |||
if bcr.Switch.Peers().Size() == 0 { | |||
break LOOP | |||
} | |||
case <-timer.C: | |||
t.Fatal("Timed out waiting to disconnect peer") | |||
} | |||
} | |||
} | |||
*/ | |||
//---------------------------------------------- | |||
// utility funcs | |||
func makeTxs(height int64) (txs []types.Tx) { | |||
for i := 0; i < 10; i++ { | |||
txs = append(txs, types.Tx([]byte{byte(height), byte(i)})) | |||
} | |||
return txs | |||
} | |||
func makeBlock(height int64, state sm.State) *types.Block { | |||
block, _ := state.MakeBlock(height, makeTxs(height), new(types.Commit)) | |||
return block | |||
} | |||
// The Test peer | |||
type bcrTestPeer struct { | |||
cmn.BaseService | |||
id p2p.ID | |||
ch chan interface{} | |||
} | |||
var _ p2p.Peer = (*bcrTestPeer)(nil) | |||
func newbcrTestPeer(id p2p.ID) *bcrTestPeer { | |||
bcr := &bcrTestPeer{ | |||
id: id, | |||
ch: make(chan interface{}, 2), | |||
} | |||
bcr.BaseService = *cmn.NewBaseService(nil, "bcrTestPeer", bcr) | |||
return bcr | |||
} | |||
func (tp *bcrTestPeer) lastBlockchainMessage() interface{} { return <-tp.ch } | |||
func (tp *bcrTestPeer) TrySend(chID byte, msgBytes []byte) bool { | |||
var msg BlockchainMessage | |||
err := cdc.UnmarshalBinaryBare(msgBytes, &msg) | |||
if err != nil { | |||
panic(cmn.ErrorWrap(err, "Error while trying to parse a BlockchainMessage")) | |||
} | |||
if _, ok := msg.(*bcStatusResponseMessage); ok { | |||
// Discard status response messages since they skew our results | |||
// We only want to deal with: | |||
// + bcBlockResponseMessage | |||
// + bcNoBlockResponseMessage | |||
} else { | |||
tp.ch <- msg | |||
} | |||
return true | |||
} | |||
func (tp *bcrTestPeer) Send(chID byte, msgBytes []byte) bool { return tp.TrySend(chID, msgBytes) } | |||
func (tp *bcrTestPeer) NodeInfo() p2p.NodeInfo { return p2p.NodeInfo{} } | |||
func (tp *bcrTestPeer) Status() p2p.ConnectionStatus { return p2p.ConnectionStatus{} } | |||
func (tp *bcrTestPeer) ID() p2p.ID { return tp.id } | |||
func (tp *bcrTestPeer) IsOutbound() bool { return false } | |||
func (tp *bcrTestPeer) IsPersistent() bool { return true } | |||
func (tp *bcrTestPeer) Get(s string) interface{} { return s } | |||
func (tp *bcrTestPeer) Set(string, interface{}) {} | |||
func (tp *bcrTestPeer) RemoteIP() net.IP { return []byte{127, 0, 0, 1} } |
@@ -1,247 +0,0 @@
package blockchain | |||
import ( | |||
"fmt" | |||
"sync" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
dbm "github.com/tendermint/tmlibs/db" | |||
"github.com/tendermint/tendermint/types" | |||
) | |||
/*
BlockStore is a simple low level store for blocks.
There are three types of information stored:
- BlockMeta: Meta information about each block
- Block part: Parts of each block, aggregated w/ PartSet
- Commit: The commit part of each block, for gossiping precommit votes
Currently the precommit signatures are duplicated in the Block parts as
well as the Commit. In the future this may change, perhaps by moving
the Commit data outside the Block. (TODO)
NOTE: BlockStore methods will panic if they encounter errors
deserializing loaded data, indicating probable corruption on disk.
*/
type BlockStore struct { | |||
db dbm.DB | |||
mtx sync.RWMutex | |||
height int64 | |||
} | |||
// NewBlockStore returns a new BlockStore with the given DB, | |||
// initialized to the last height that was committed to the DB. | |||
func NewBlockStore(db dbm.DB) *BlockStore { | |||
bsjson := LoadBlockStoreStateJSON(db) | |||
return &BlockStore{ | |||
height: bsjson.Height, | |||
db: db, | |||
} | |||
} | |||
// Height returns the last known contiguous block height. | |||
func (bs *BlockStore) Height() int64 { | |||
bs.mtx.RLock() | |||
defer bs.mtx.RUnlock() | |||
return bs.height | |||
} | |||
// LoadBlock returns the block with the given height. | |||
// If no block is found for that height, it returns nil. | |||
func (bs *BlockStore) LoadBlock(height int64) *types.Block { | |||
var blockMeta = bs.LoadBlockMeta(height) | |||
if blockMeta == nil { | |||
return nil | |||
} | |||
var block = new(types.Block) | |||
buf := []byte{} | |||
for i := 0; i < blockMeta.BlockID.PartsHeader.Total; i++ { | |||
part := bs.LoadBlockPart(height, i) | |||
buf = append(buf, part.Bytes...) | |||
} | |||
err := cdc.UnmarshalBinary(buf, block) | |||
if err != nil { | |||
// NOTE: The existence of meta should imply the existence of the | |||
// block. So, make sure meta is only saved after blocks are saved. | |||
panic(cmn.ErrorWrap(err, "Error reading block")) | |||
} | |||
return block | |||
} | |||
// LoadBlockPart returns the Part at the given index | |||
// from the block at the given height. | |||
// If no part is found for the given height and index, it returns nil. | |||
func (bs *BlockStore) LoadBlockPart(height int64, index int) *types.Part { | |||
var part = new(types.Part) | |||
bz := bs.db.Get(calcBlockPartKey(height, index)) | |||
if len(bz) == 0 { | |||
return nil | |||
} | |||
err := cdc.UnmarshalBinaryBare(bz, part) | |||
if err != nil { | |||
panic(cmn.ErrorWrap(err, "Error reading block part")) | |||
} | |||
return part | |||
} | |||
// LoadBlockMeta returns the BlockMeta for the given height. | |||
// If no block is found for the given height, it returns nil. | |||
func (bs *BlockStore) LoadBlockMeta(height int64) *types.BlockMeta { | |||
var blockMeta = new(types.BlockMeta) | |||
bz := bs.db.Get(calcBlockMetaKey(height)) | |||
if len(bz) == 0 { | |||
return nil | |||
} | |||
err := cdc.UnmarshalBinaryBare(bz, blockMeta) | |||
if err != nil { | |||
panic(cmn.ErrorWrap(err, "Error reading block meta")) | |||
} | |||
return blockMeta | |||
} | |||
// LoadBlockCommit returns the Commit for the given height. | |||
// This commit consists of the +2/3 and other Precommit-votes for block at `height`, | |||
// and it comes from the block.LastCommit for `height+1`. | |||
// If no commit is found for the given height, it returns nil. | |||
func (bs *BlockStore) LoadBlockCommit(height int64) *types.Commit { | |||
var commit = new(types.Commit) | |||
bz := bs.db.Get(calcBlockCommitKey(height)) | |||
if len(bz) == 0 { | |||
return nil | |||
} | |||
err := cdc.UnmarshalBinaryBare(bz, commit) | |||
if err != nil { | |||
panic(cmn.ErrorWrap(err, "Error reading block commit")) | |||
} | |||
return commit | |||
} | |||
// LoadSeenCommit returns the locally seen Commit for the given height. | |||
// This is useful when we've seen a commit, but there has not yet been | |||
// a new block at `height + 1` that includes this commit in its block.LastCommit. | |||
func (bs *BlockStore) LoadSeenCommit(height int64) *types.Commit { | |||
var commit = new(types.Commit) | |||
bz := bs.db.Get(calcSeenCommitKey(height)) | |||
if len(bz) == 0 { | |||
return nil | |||
} | |||
err := cdc.UnmarshalBinaryBare(bz, commit) | |||
if err != nil { | |||
panic(cmn.ErrorWrap(err, "Error reading block seen commit")) | |||
} | |||
return commit | |||
} | |||
// SaveBlock persists the given block, blockParts, and seenCommit to the underlying db. | |||
// blockParts: Must be parts of the block | |||
// seenCommit: The +2/3 precommits that were seen which committed at height. | |||
// If all the nodes restart after committing a block, | |||
// we need this to reload the precommits and catch nodes up to the
// most recent height. Otherwise they'd stall at H-1.
func (bs *BlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) { | |||
if block == nil { | |||
cmn.PanicSanity("BlockStore can only save a non-nil block") | |||
} | |||
height := block.Height | |||
if g, w := height, bs.Height()+1; g != w { | |||
cmn.PanicSanity(cmn.Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", w, g)) | |||
} | |||
if !blockParts.IsComplete() { | |||
cmn.PanicSanity(cmn.Fmt("BlockStore can only save complete block part sets")) | |||
} | |||
// Save block meta | |||
blockMeta := types.NewBlockMeta(block, blockParts) | |||
metaBytes := cdc.MustMarshalBinaryBare(blockMeta) | |||
bs.db.Set(calcBlockMetaKey(height), metaBytes) | |||
// Save block parts | |||
for i := 0; i < blockParts.Total(); i++ { | |||
part := blockParts.GetPart(i) | |||
bs.saveBlockPart(height, i, part) | |||
} | |||
// Save block commit (duplicate and separate from the Block) | |||
blockCommitBytes := cdc.MustMarshalBinaryBare(block.LastCommit) | |||
bs.db.Set(calcBlockCommitKey(height-1), blockCommitBytes) | |||
// Save seen commit (seen +2/3 precommits for block) | |||
// NOTE: we can delete this at a later height | |||
seenCommitBytes := cdc.MustMarshalBinaryBare(seenCommit) | |||
bs.db.Set(calcSeenCommitKey(height), seenCommitBytes) | |||
// Save new BlockStoreStateJSON descriptor | |||
BlockStoreStateJSON{Height: height}.Save(bs.db) | |||
// Done! | |||
bs.mtx.Lock() | |||
bs.height = height | |||
bs.mtx.Unlock() | |||
// Flush | |||
bs.db.SetSync(nil, nil) | |||
} | |||
func (bs *BlockStore) saveBlockPart(height int64, index int, part *types.Part) { | |||
if height != bs.Height()+1 { | |||
cmn.PanicSanity(cmn.Fmt("BlockStore can only save contiguous blocks. Wanted %v, got %v", bs.Height()+1, height)) | |||
} | |||
partBytes := cdc.MustMarshalBinaryBare(part) | |||
bs.db.Set(calcBlockPartKey(height, index), partBytes) | |||
} | |||
//----------------------------------------------------------------------------- | |||
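// Underlying DB key layout (as produced by the helpers below):
//
//	H:<height>         -> BlockMeta
//	P:<height>:<index> -> block Part
//	C:<height>         -> Commit for <height> (taken from the LastCommit of height+1)
//	SC:<height>        -> locally seen Commit for <height>
//	blockStore         -> BlockStoreStateJSON (the last contiguous height)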
func calcBlockMetaKey(height int64) []byte { | |||
return []byte(fmt.Sprintf("H:%v", height)) | |||
} | |||
func calcBlockPartKey(height int64, partIndex int) []byte { | |||
return []byte(fmt.Sprintf("P:%v:%v", height, partIndex)) | |||
} | |||
func calcBlockCommitKey(height int64) []byte { | |||
return []byte(fmt.Sprintf("C:%v", height)) | |||
} | |||
func calcSeenCommitKey(height int64) []byte { | |||
return []byte(fmt.Sprintf("SC:%v", height)) | |||
} | |||
//----------------------------------------------------------------------------- | |||
var blockStoreKey = []byte("blockStore") | |||
type BlockStoreStateJSON struct { | |||
Height int64 `json:"height"` | |||
} | |||
// Save persists the blockStore state to the database as JSON. | |||
func (bsj BlockStoreStateJSON) Save(db dbm.DB) { | |||
bytes, err := cdc.MarshalJSON(bsj) | |||
if err != nil { | |||
cmn.PanicSanity(cmn.Fmt("Could not marshal state bytes: %v", err)) | |||
} | |||
db.SetSync(blockStoreKey, bytes) | |||
} | |||
// LoadBlockStoreStateJSON returns the BlockStoreStateJSON as loaded from disk. | |||
// If no BlockStoreStateJSON was previously persisted, it returns the zero value. | |||
func LoadBlockStoreStateJSON(db dbm.DB) BlockStoreStateJSON { | |||
bytes := db.Get(blockStoreKey) | |||
if len(bytes) == 0 { | |||
return BlockStoreStateJSON{ | |||
Height: 0, | |||
} | |||
} | |||
bsj := BlockStoreStateJSON{} | |||
err := cdc.UnmarshalJSON(bytes, &bsj) | |||
if err != nil { | |||
panic(fmt.Sprintf("Could not unmarshal bytes: %X", bytes)) | |||
} | |||
return bsj | |||
} |
@@ -1,383 +0,0 @@
package blockchain | |||
import ( | |||
"bytes" | |||
"fmt" | |||
"runtime/debug" | |||
"strings" | |||
"testing" | |||
"time" | |||
"github.com/stretchr/testify/assert" | |||
"github.com/stretchr/testify/require" | |||
"github.com/tendermint/tmlibs/db" | |||
"github.com/tendermint/tmlibs/log" | |||
"github.com/tendermint/tendermint/types" | |||
) | |||
func TestLoadBlockStoreStateJSON(t *testing.T) { | |||
db := db.NewMemDB() | |||
bsj := &BlockStoreStateJSON{Height: 1000} | |||
bsj.Save(db) | |||
retrBSJ := LoadBlockStoreStateJSON(db) | |||
assert.Equal(t, *bsj, retrBSJ, "expected the retrieved state to match")
} | |||
func TestNewBlockStore(t *testing.T) { | |||
db := db.NewMemDB() | |||
db.Set(blockStoreKey, []byte(`{"height": 10000}`)) | |||
bs := NewBlockStore(db) | |||
require.Equal(t, int64(10000), bs.Height(), "failed to properly parse blockstore") | |||
panicCausers := []struct { | |||
data []byte | |||
wantErr string | |||
}{ | |||
{[]byte("artful-doger"), "not unmarshal bytes"}, | |||
{[]byte(" "), "unmarshal bytes"}, | |||
} | |||
for i, tt := range panicCausers { | |||
// Expecting a panic here on trying to parse an invalid blockStore | |||
_, _, panicErr := doFn(func() (interface{}, error) { | |||
db.Set(blockStoreKey, tt.data) | |||
_ = NewBlockStore(db) | |||
return nil, nil | |||
}) | |||
require.NotNil(t, panicErr, "#%d panicCauser: %q expected a panic", i, tt.data) | |||
assert.Contains(t, panicErr.Error(), tt.wantErr, "#%d data: %q", i, tt.data) | |||
} | |||
db.Set(blockStoreKey, nil) | |||
bs = NewBlockStore(db) | |||
assert.Equal(t, bs.Height(), int64(0), "expecting nil bytes to be unmarshaled alright") | |||
} | |||
func freshBlockStore() (*BlockStore, db.DB) { | |||
db := db.NewMemDB() | |||
return NewBlockStore(db), db | |||
} | |||
var ( | |||
state, _ = makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer))) | |||
block = makeBlock(1, state) | |||
partSet = block.MakePartSet(2) | |||
part1 = partSet.GetPart(0) | |||
part2 = partSet.GetPart(1) | |||
seenCommit1 = &types.Commit{Precommits: []*types.Vote{{Height: 10, | |||
Timestamp: time.Now().UTC()}}} | |||
) | |||
// TODO: This test should be simplified ... | |||
func TestBlockStoreSaveLoadBlock(t *testing.T) { | |||
state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer))) | |||
require.Equal(t, bs.Height(), int64(0), "initially the height should be zero") | |||
// check there are no blocks at various heights | |||
noBlockHeights := []int64{0, -1, 100, 1000, 2} | |||
for i, height := range noBlockHeights { | |||
if g := bs.LoadBlock(height); g != nil { | |||
t.Errorf("#%d: height(%d) got a block; want nil", i, height) | |||
} | |||
} | |||
// save a block | |||
block := makeBlock(bs.Height()+1, state) | |||
validPartSet := block.MakePartSet(2) | |||
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10, | |||
Timestamp: time.Now().UTC()}}} | |||
bs.SaveBlock(block, validPartSet, seenCommit)
require.Equal(t, bs.Height(), block.Header.Height, "expecting the new height to be changed") | |||
incompletePartSet := types.NewPartSetFromHeader(types.PartSetHeader{Total: 2}) | |||
uncontiguousPartSet := types.NewPartSetFromHeader(types.PartSetHeader{Total: 0}) | |||
uncontiguousPartSet.AddPart(part2) | |||
header1 := types.Header{ | |||
Height: 1, | |||
NumTxs: 100, | |||
ChainID: "block_test", | |||
Time: time.Now(), | |||
} | |||
header2 := header1 | |||
header2.Height = 4 | |||
// End of setup, test data | |||
commitAtH10 := &types.Commit{Precommits: []*types.Vote{{Height: 10, | |||
Timestamp: time.Now().UTC()}}} | |||
tuples := []struct { | |||
block *types.Block | |||
parts *types.PartSet | |||
seenCommit *types.Commit | |||
wantErr bool | |||
wantPanic string | |||
corruptBlockInDB bool | |||
corruptCommitInDB bool | |||
corruptSeenCommitInDB bool | |||
eraseCommitInDB bool | |||
eraseSeenCommitInDB bool | |||
}{ | |||
{ | |||
block: newBlock(&header1, commitAtH10), | |||
parts: validPartSet, | |||
seenCommit: seenCommit1, | |||
}, | |||
{ | |||
block: nil, | |||
wantPanic: "only save a non-nil block", | |||
}, | |||
{ | |||
block: newBlock(&header2, commitAtH10), | |||
parts: uncontiguousPartSet, | |||
wantPanic: "only save contiguous blocks", // and incomplete and uncontiguous parts | |||
}, | |||
{ | |||
block: newBlock(&header1, commitAtH10), | |||
parts: incompletePartSet, | |||
wantPanic: "only save complete block", // incomplete parts | |||
}, | |||
{ | |||
block: newBlock(&header1, commitAtH10), | |||
parts: validPartSet, | |||
seenCommit: seenCommit1, | |||
corruptCommitInDB: true, // Corrupt the DB's commit entry | |||
wantPanic: "Error reading block commit", | |||
}, | |||
{ | |||
block: newBlock(&header1, commitAtH10), | |||
parts: validPartSet, | |||
seenCommit: seenCommit1, | |||
wantPanic: "Error reading block", | |||
corruptBlockInDB: true, // Corrupt the DB's block entry | |||
}, | |||
{ | |||
block: newBlock(&header1, commitAtH10), | |||
parts: validPartSet, | |||
seenCommit: seenCommit1, | |||
// Expecting no error and we want a nil back | |||
eraseSeenCommitInDB: true, | |||
}, | |||
{ | |||
block: newBlock(&header1, commitAtH10), | |||
parts: validPartSet, | |||
seenCommit: seenCommit1, | |||
corruptSeenCommitInDB: true, | |||
wantPanic: "Error reading block seen commit", | |||
}, | |||
{ | |||
block: newBlock(&header1, commitAtH10), | |||
parts: validPartSet, | |||
seenCommit: seenCommit1, | |||
// Expecting no error and we want a nil back | |||
eraseCommitInDB: true, | |||
}, | |||
} | |||
type quad struct { | |||
block *types.Block | |||
commit *types.Commit | |||
meta *types.BlockMeta | |||
seenCommit *types.Commit | |||
} | |||
for i, tuple := range tuples { | |||
bs, db := freshBlockStore() | |||
// SaveBlock | |||
res, err, panicErr := doFn(func() (interface{}, error) { | |||
bs.SaveBlock(tuple.block, tuple.parts, tuple.seenCommit) | |||
if tuple.block == nil { | |||
return nil, nil | |||
} | |||
if tuple.corruptBlockInDB { | |||
db.Set(calcBlockMetaKey(tuple.block.Height), []byte("block-bogus")) | |||
} | |||
bBlock := bs.LoadBlock(tuple.block.Height) | |||
bBlockMeta := bs.LoadBlockMeta(tuple.block.Height) | |||
if tuple.eraseSeenCommitInDB { | |||
db.Delete(calcSeenCommitKey(tuple.block.Height)) | |||
} | |||
if tuple.corruptSeenCommitInDB { | |||
db.Set(calcSeenCommitKey(tuple.block.Height), []byte("bogus-seen-commit")) | |||
} | |||
bSeenCommit := bs.LoadSeenCommit(tuple.block.Height) | |||
commitHeight := tuple.block.Height - 1 | |||
if tuple.eraseCommitInDB { | |||
db.Delete(calcBlockCommitKey(commitHeight)) | |||
} | |||
if tuple.corruptCommitInDB { | |||
db.Set(calcBlockCommitKey(commitHeight), []byte("foo-bogus")) | |||
} | |||
bCommit := bs.LoadBlockCommit(commitHeight) | |||
return &quad{block: bBlock, seenCommit: bSeenCommit, commit: bCommit, | |||
meta: bBlockMeta}, nil | |||
}) | |||
if subStr := tuple.wantPanic; subStr != "" { | |||
if panicErr == nil { | |||
t.Errorf("#%d: want a non-nil panic", i) | |||
} else if got := panicErr.Error(); !strings.Contains(got, subStr) { | |||
t.Errorf("#%d:\n\tgotErr: %q\nwant substring: %q", i, got, subStr) | |||
} | |||
continue | |||
} | |||
if tuple.wantErr { | |||
if err == nil { | |||
t.Errorf("#%d: got nil error", i) | |||
} | |||
continue | |||
} | |||
assert.Nil(t, panicErr, "#%d: unexpected panic", i) | |||
assert.Nil(t, err, "#%d: expecting a nil error", i)
qua, ok := res.(*quad) | |||
if !ok || qua == nil { | |||
t.Errorf("#%d: got nil quad back; gotType=%T", i, res) | |||
continue | |||
} | |||
if tuple.eraseSeenCommitInDB { | |||
assert.Nil(t, qua.seenCommit, | |||
"erased the seenCommit in the DB hence we should get back a nil seenCommit") | |||
} | |||
if tuple.eraseCommitInDB { | |||
assert.Nil(t, qua.commit, | |||
"erased the commit in the DB hence we should get back a nil commit") | |||
} | |||
} | |||
} | |||
func TestLoadBlockPart(t *testing.T) { | |||
bs, db := freshBlockStore() | |||
height, index := int64(10), 1 | |||
loadPart := func() (interface{}, error) { | |||
part := bs.LoadBlockPart(height, index) | |||
return part, nil | |||
} | |||
// Initially no contents. | |||
// 1. Requesting a non-existent block part shouldn't fail
res, _, panicErr := doFn(loadPart) | |||
require.Nil(t, panicErr, "a non-existent block part shouldn't cause a panic") | |||
require.Nil(t, res, "a non-existent block part should return nil") | |||
// 2. Next save a corrupted block then try to load it | |||
db.Set(calcBlockPartKey(height, index), []byte("Tendermint")) | |||
res, _, panicErr = doFn(loadPart) | |||
require.NotNil(t, panicErr, "expecting a non-nil panic") | |||
require.Contains(t, panicErr.Error(), "Error reading block part") | |||
// 3. A good block serialized and saved to the DB should be retrievable | |||
db.Set(calcBlockPartKey(height, index), cdc.MustMarshalBinaryBare(part1)) | |||
gotPart, _, panicErr := doFn(loadPart) | |||
require.Nil(t, panicErr, "an existent and proper block should not panic") | |||
require.NotNil(t, gotPart, "a properly saved block part should be retrievable")
require.Equal(t, gotPart.(*types.Part).Hash(), part1.Hash(), | |||
"expecting successful retrieval of previously saved block") | |||
} | |||
func TestLoadBlockMeta(t *testing.T) { | |||
bs, db := freshBlockStore() | |||
height := int64(10) | |||
loadMeta := func() (interface{}, error) { | |||
meta := bs.LoadBlockMeta(height) | |||
return meta, nil | |||
} | |||
// Initially no contents. | |||
// 1. Requesting a non-existent blockMeta shouldn't fail
res, _, panicErr := doFn(loadMeta) | |||
require.Nil(t, panicErr, "a non-existent blockMeta shouldn't cause a panic") | |||
require.Nil(t, res, "a non-existent blockMeta should return nil") | |||
// 2. Next save a corrupted blockMeta then try to load it | |||
db.Set(calcBlockMetaKey(height), []byte("Tendermint-Meta")) | |||
res, _, panicErr = doFn(loadMeta) | |||
require.NotNil(t, panicErr, "expecting a non-nil panic") | |||
require.Contains(t, panicErr.Error(), "Error reading block meta") | |||
// 3. A good blockMeta serialized and saved to the DB should be retrievable | |||
meta := &types.BlockMeta{} | |||
db.Set(calcBlockMetaKey(height), cdc.MustMarshalBinaryBare(meta)) | |||
gotMeta, _, panicErr := doFn(loadMeta) | |||
require.Nil(t, panicErr, "an existent and proper block should not panic") | |||
require.NotNil(t, gotMeta, "a properly saved blockMeta should be retrievable")
require.Equal(t, cdc.MustMarshalBinaryBare(meta), cdc.MustMarshalBinaryBare(gotMeta), | |||
"expecting successful retrieval of previously saved blockMeta") | |||
} | |||
func TestBlockFetchAtHeight(t *testing.T) { | |||
state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer))) | |||
require.Equal(t, bs.Height(), int64(0), "initially the height should be zero") | |||
block := makeBlock(bs.Height()+1, state) | |||
partSet := block.MakePartSet(2) | |||
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10, | |||
Timestamp: time.Now().UTC()}}} | |||
bs.SaveBlock(block, partSet, seenCommit) | |||
require.Equal(t, bs.Height(), block.Header.Height, "expecting the new height to be changed") | |||
blockAtHeight := bs.LoadBlock(bs.Height()) | |||
bz1 := cdc.MustMarshalBinaryBare(block) | |||
bz2 := cdc.MustMarshalBinaryBare(blockAtHeight) | |||
require.Equal(t, bz1, bz2) | |||
require.Equal(t, block.Hash(), blockAtHeight.Hash(), | |||
"expecting a successful load of the last saved block") | |||
blockAtHeightPlus1 := bs.LoadBlock(bs.Height() + 1) | |||
require.Nil(t, blockAtHeightPlus1, "expecting an unsuccessful load of Height()+1") | |||
blockAtHeightPlus2 := bs.LoadBlock(bs.Height() + 2) | |||
require.Nil(t, blockAtHeightPlus2, "expecting an unsuccessful load of Height()+2") | |||
} | |||
func doFn(fn func() (interface{}, error)) (res interface{}, err error, panicErr error) { | |||
defer func() { | |||
if r := recover(); r != nil { | |||
switch e := r.(type) { | |||
case error: | |||
panicErr = e | |||
case string: | |||
panicErr = fmt.Errorf("%s", e) | |||
default: | |||
if st, ok := r.(fmt.Stringer); ok { | |||
panicErr = fmt.Errorf("%s", st) | |||
} else { | |||
panicErr = fmt.Errorf("%s", debug.Stack()) | |||
} | |||
} | |||
} | |||
}() | |||
res, err = fn() | |||
return res, err, panicErr | |||
} | |||
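// Illustrative sketch (hypothetical test, not part of the original file):
// doFn converts a panic raised inside fn into panicErr, so tests can assert
// on the panic message instead of crashing the test binary.
func TestDoFnRecoversPanic(t *testing.T) {
_, _, panicErr := doFn(func() (interface{}, error) {
panic("boom")
})
require.NotNil(t, panicErr, "the panic should surface as panicErr")
require.Contains(t, panicErr.Error(), "boom")
}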
func newBlock(hdr *types.Header, lastCommit *types.Commit) *types.Block { | |||
return &types.Block{ | |||
Header: hdr, | |||
LastCommit: lastCommit, | |||
} | |||
} |
@ -1,13 +0,0 @@ | |||
package blockchain | |||
import ( | |||
"github.com/tendermint/go-amino" | |||
"github.com/tendermint/go-crypto" | |||
) | |||
var cdc = amino.NewCodec() | |||
func init() { | |||
RegisterBlockchainMessages(cdc) | |||
crypto.RegisterAmino(cdc) | |||
} |
@ -1,53 +0,0 @@ | |||
package main | |||
import ( | |||
"flag" | |||
"os" | |||
crypto "github.com/tendermint/go-crypto" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
"github.com/tendermint/tmlibs/log" | |||
"github.com/tendermint/tendermint/privval" | |||
) | |||
func main() { | |||
var ( | |||
addr = flag.String("addr", ":26659", "Address of client to connect to") | |||
chainID = flag.String("chain-id", "mychain", "chain id") | |||
privValPath = flag.String("priv", "", "priv val file path") | |||
logger = log.NewTMLogger( | |||
log.NewSyncWriter(os.Stdout), | |||
).With("module", "priv_val") | |||
) | |||
flag.Parse() | |||
logger.Info( | |||
"Starting private validator", | |||
"addr", *addr, | |||
"chainID", *chainID, | |||
"privPath", *privValPath, | |||
) | |||
pv := privval.LoadFilePV(*privValPath) | |||
rs := privval.NewRemoteSigner( | |||
logger, | |||
*chainID, | |||
*addr, | |||
pv, | |||
crypto.GenPrivKeyEd25519(), | |||
) | |||
err := rs.Start() | |||
if err != nil { | |||
panic(err) | |||
} | |||
cmn.TrapSignal(func() { | |||
err := rs.Stop() | |||
if err != nil { | |||
panic(err) | |||
} | |||
}) | |||
} |
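// Illustrative pairing (assumptions, not from the original file): the node
// listens for an external signer on priv_validator_laddr, and this program
// dials that address. A hypothetical session (binary name assumed):
//
//	tendermint node --priv_validator_laddr tcp://127.0.0.1:26659
//	priv_val_server -addr 127.0.0.1:26659 -chain-id mychain -priv priv_validator.json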
@ -1,32 +0,0 @@ | |||
package commands | |||
import ( | |||
"fmt" | |||
"github.com/spf13/cobra" | |||
"github.com/tendermint/tendermint/p2p" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
// GenNodeKeyCmd allows the generation of a node key. It prints the node's ID
// to standard output.
var GenNodeKeyCmd = &cobra.Command{ | |||
Use: "gen_node_key", | |||
Short: "Generate a node key for this node and print its ID", | |||
RunE: genNodeKey, | |||
} | |||
func genNodeKey(cmd *cobra.Command, args []string) error { | |||
nodeKeyFile := config.NodeKeyFile() | |||
if cmn.FileExists(nodeKeyFile) { | |||
return fmt.Errorf("node key at %s already exists", nodeKeyFile) | |||
} | |||
nodeKey, err := p2p.LoadOrGenNodeKey(nodeKeyFile) | |||
if err != nil { | |||
return err | |||
} | |||
fmt.Println(nodeKey.ID()) | |||
return nil | |||
} |
@ -1,27 +0,0 @@ | |||
package commands | |||
import ( | |||
"fmt" | |||
"github.com/spf13/cobra" | |||
"github.com/tendermint/tendermint/privval" | |||
) | |||
// GenValidatorCmd allows the generation of a keypair for a | |||
// validator. | |||
var GenValidatorCmd = &cobra.Command{ | |||
Use: "gen_validator", | |||
Short: "Generate new validator keypair", | |||
Run: genValidator, | |||
} | |||
func genValidator(cmd *cobra.Command, args []string) { | |||
pv := privval.GenFilePV("") | |||
jsbz, err := cdc.MarshalJSON(pv) | |||
if err != nil { | |||
panic(err) | |||
} | |||
fmt.Printf("%v\n", string(jsbz))
} |
@ -1,70 +0,0 @@ | |||
package commands | |||
import ( | |||
"time" | |||
"github.com/spf13/cobra" | |||
cfg "github.com/tendermint/tendermint/config" | |||
"github.com/tendermint/tendermint/p2p" | |||
"github.com/tendermint/tendermint/privval" | |||
"github.com/tendermint/tendermint/types" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
// InitFilesCmd initialises a fresh Tendermint Core instance. | |||
var InitFilesCmd = &cobra.Command{ | |||
Use: "init", | |||
Short: "Initialize Tendermint", | |||
RunE: initFiles, | |||
} | |||
func initFiles(cmd *cobra.Command, args []string) error { | |||
return initFilesWithConfig(config) | |||
} | |||
func initFilesWithConfig(config *cfg.Config) error { | |||
// private validator | |||
privValFile := config.PrivValidatorFile() | |||
var pv *privval.FilePV | |||
if cmn.FileExists(privValFile) { | |||
pv = privval.LoadFilePV(privValFile) | |||
logger.Info("Found private validator", "path", privValFile) | |||
} else { | |||
pv = privval.GenFilePV(privValFile) | |||
pv.Save() | |||
logger.Info("Generated private validator", "path", privValFile) | |||
} | |||
nodeKeyFile := config.NodeKeyFile() | |||
if cmn.FileExists(nodeKeyFile) { | |||
logger.Info("Found node key", "path", nodeKeyFile) | |||
} else { | |||
if _, err := p2p.LoadOrGenNodeKey(nodeKeyFile); err != nil { | |||
return err | |||
} | |||
logger.Info("Generated node key", "path", nodeKeyFile) | |||
} | |||
// genesis file | |||
genFile := config.GenesisFile() | |||
if cmn.FileExists(genFile) { | |||
logger.Info("Found genesis file", "path", genFile) | |||
} else { | |||
genDoc := types.GenesisDoc{ | |||
ChainID: cmn.Fmt("test-chain-%v", cmn.RandStr(6)), | |||
GenesisTime: time.Now(), | |||
} | |||
genDoc.Validators = []types.GenesisValidator{{ | |||
PubKey: pv.GetPubKey(), | |||
Power: 10, | |||
}} | |||
if err := genDoc.SaveAs(genFile); err != nil { | |||
return err | |||
} | |||
logger.Info("Generated genesis file", "path", genFile) | |||
} | |||
return nil | |||
} |
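// Illustrative result (assuming the default file names from config/config.go):
// a fresh `tendermint init` leaves three files under the root:
//
//	config/priv_validator.json  (generated by GenFilePV)
//	config/node_key.json        (generated by LoadOrGenNodeKey)
//	config/genesis.json         (single-validator genesis doc)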
@ -1,87 +0,0 @@ | |||
package commands | |||
import ( | |||
"fmt" | |||
"net/url" | |||
"github.com/spf13/cobra" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
"github.com/tendermint/tendermint/lite/proxy" | |||
rpcclient "github.com/tendermint/tendermint/rpc/client" | |||
) | |||
// LiteCmd represents the base command when called without any subcommands | |||
var LiteCmd = &cobra.Command{ | |||
Use: "lite", | |||
Short: "Run lite-client proxy server, verifying tendermint rpc", | |||
Long: `This node will run a secure proxy to a tendermint rpc server. | |||
All calls that can be traced back to a block header by a proof
will be verified before passing them back to the caller. Other than
that it will present the same interface as a full tendermint node,
just with added trust and running locally.`, | |||
RunE: runProxy, | |||
SilenceUsage: true, | |||
} | |||
var ( | |||
listenAddr string | |||
nodeAddr string | |||
chainID string | |||
home string | |||
) | |||
func init() { | |||
LiteCmd.Flags().StringVar(&listenAddr, "laddr", "tcp://localhost:8888", "Serve the proxy on the given address") | |||
LiteCmd.Flags().StringVar(&nodeAddr, "node", "tcp://localhost:26657", "Connect to a Tendermint node at this address") | |||
LiteCmd.Flags().StringVar(&chainID, "chain-id", "tendermint", "Specify the Tendermint chain ID") | |||
LiteCmd.Flags().StringVar(&home, "home-dir", ".tendermint-lite", "Specify the home directory") | |||
} | |||
func ensureAddrHasSchemeOrDefaultToTCP(addr string) (string, error) { | |||
u, err := url.Parse(addr) | |||
if err != nil { | |||
return "", err | |||
} | |||
switch u.Scheme { | |||
case "tcp", "unix": | |||
case "": | |||
u.Scheme = "tcp" | |||
default: | |||
return "", fmt.Errorf("unknown scheme %q, use either tcp or unix", u.Scheme) | |||
} | |||
return u.String(), nil | |||
} | |||
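// Illustrative sketch (hypothetical helper, not part of the original file):
// expected normalization results, shown as comments.
func exampleEnsureAddrScheme() {
a, _ := ensureAddrHasSchemeOrDefaultToTCP("tcp://localhost:26657") // unchanged
b, _ := ensureAddrHasSchemeOrDefaultToTCP("//localhost:8888")      // "tcp://localhost:8888"
_, err := ensureAddrHasSchemeOrDefaultToTCP("http://localhost")    // unknown scheme error
fmt.Println(a, b, err)
}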
func runProxy(cmd *cobra.Command, args []string) error { | |||
nodeAddr, err := ensureAddrHasSchemeOrDefaultToTCP(nodeAddr) | |||
if err != nil { | |||
return err | |||
} | |||
listenAddr, err := ensureAddrHasSchemeOrDefaultToTCP(listenAddr) | |||
if err != nil { | |||
return err | |||
} | |||
// First, connect a client | |||
node := rpcclient.NewHTTP(nodeAddr, "/websocket") | |||
cert, err := proxy.GetCertifier(chainID, home, nodeAddr) | |||
if err != nil { | |||
return err | |||
} | |||
sc := proxy.SecureClient(node, cert) | |||
err = proxy.StartProxy(sc, listenAddr, logger) | |||
if err != nil { | |||
return err | |||
} | |||
cmn.TrapSignal(func() { | |||
// TODO: close up shop | |||
}) | |||
return nil | |||
} |
@ -1,31 +0,0 @@ | |||
package commands | |||
import ( | |||
"fmt" | |||
"github.com/spf13/cobra" | |||
"github.com/tendermint/tendermint/p2p/upnp" | |||
) | |||
// ProbeUpnpCmd adds capabilities to test the UPnP functionality. | |||
var ProbeUpnpCmd = &cobra.Command{ | |||
Use: "probe_upnp", | |||
Short: "Test UPnP functionality", | |||
RunE: probeUpnp, | |||
} | |||
func probeUpnp(cmd *cobra.Command, args []string) error { | |||
capabilities, err := upnp.Probe(logger) | |||
if err != nil { | |||
fmt.Println("Probe failed: ", err) | |||
} else { | |||
fmt.Println("Probe success!") | |||
jsonBytes, err := cdc.MarshalJSON(capabilities) | |||
if err != nil { | |||
return err | |||
} | |||
fmt.Println(string(jsonBytes)) | |||
} | |||
return nil | |||
} |
@ -1,26 +0,0 @@ | |||
package commands | |||
import ( | |||
"github.com/spf13/cobra" | |||
"github.com/tendermint/tendermint/consensus" | |||
) | |||
// ReplayCmd allows replaying of messages from the WAL. | |||
var ReplayCmd = &cobra.Command{ | |||
Use: "replay", | |||
Short: "Replay messages from WAL", | |||
Run: func(cmd *cobra.Command, args []string) { | |||
consensus.RunReplayFile(config.BaseConfig, config.Consensus, false) | |||
}, | |||
} | |||
// ReplayConsoleCmd allows replaying of messages from the WAL in a | |||
// console. | |||
var ReplayConsoleCmd = &cobra.Command{ | |||
Use: "replay_console", | |||
Short: "Replay messages from WAL in a console", | |||
Run: func(cmd *cobra.Command, args []string) { | |||
consensus.RunReplayFile(config.BaseConfig, config.Consensus, true) | |||
}, | |||
} |
@ -1,69 +0,0 @@ | |||
package commands | |||
import ( | |||
"os" | |||
"github.com/spf13/cobra" | |||
"github.com/tendermint/tendermint/privval" | |||
"github.com/tendermint/tmlibs/log" | |||
) | |||
// ResetAllCmd removes the database of this Tendermint core | |||
// instance. | |||
var ResetAllCmd = &cobra.Command{ | |||
Use: "unsafe_reset_all", | |||
Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state", | |||
Run: resetAll, | |||
} | |||
// ResetPrivValidatorCmd resets the private validator files. | |||
var ResetPrivValidatorCmd = &cobra.Command{ | |||
Use: "unsafe_reset_priv_validator", | |||
Short: "(unsafe) Reset this node's validator to genesis state", | |||
Run: resetPrivValidator, | |||
} | |||
// XXX: this is totally unsafe. | |||
// it's only suitable for testnets. | |||
func resetAll(cmd *cobra.Command, args []string) { | |||
ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidatorFile(), logger) | |||
} | |||
// XXX: this is totally unsafe. | |||
// it's only suitable for testnets. | |||
func resetPrivValidator(cmd *cobra.Command, args []string) { | |||
resetFilePV(config.PrivValidatorFile(), logger) | |||
} | |||
// ResetAll removes the privValidator and address book files plus all data. | |||
// Exported so other CLI tools can use it. | |||
func ResetAll(dbDir, addrBookFile, privValFile string, logger log.Logger) { | |||
resetFilePV(privValFile, logger) | |||
removeAddrBook(addrBookFile, logger) | |||
if err := os.RemoveAll(dbDir); err == nil { | |||
logger.Info("Removed all blockchain history", "dir", dbDir) | |||
} else { | |||
logger.Error("Error removing all blockchain history", "dir", dbDir, "err", err) | |||
} | |||
} | |||
func resetFilePV(privValFile string, logger log.Logger) { | |||
if _, err := os.Stat(privValFile); err == nil { | |||
pv := privval.LoadFilePV(privValFile) | |||
pv.Reset() | |||
logger.Info("Reset private validator file to genesis state", "file", privValFile) | |||
} else { | |||
pv := privval.GenFilePV(privValFile) | |||
pv.Save() | |||
logger.Info("Generated private validator file", "file", privValFile) | |||
} | |||
} | |||
func removeAddrBook(addrBookFile string, logger log.Logger) { | |||
if err := os.Remove(addrBookFile); err == nil { | |||
logger.Info("Removed existing address book", "file", addrBookFile) | |||
} else if !os.IsNotExist(err) { | |||
logger.Info("Error removing address book", "file", addrBookFile, "err", err) | |||
} | |||
} |
@ -1,63 +0,0 @@ | |||
package commands | |||
import ( | |||
"os" | |||
"github.com/spf13/cobra" | |||
"github.com/spf13/viper" | |||
cfg "github.com/tendermint/tendermint/config" | |||
"github.com/tendermint/tmlibs/cli" | |||
tmflags "github.com/tendermint/tmlibs/cli/flags" | |||
"github.com/tendermint/tmlibs/log" | |||
) | |||
var ( | |||
config = cfg.DefaultConfig() | |||
logger = log.NewTMLogger(log.NewSyncWriter(os.Stdout)) | |||
) | |||
func init() { | |||
registerFlagsRootCmd(RootCmd) | |||
} | |||
func registerFlagsRootCmd(cmd *cobra.Command) { | |||
cmd.PersistentFlags().String("log_level", config.LogLevel, "Log level") | |||
} | |||
// ParseConfig retrieves the default environment configuration, | |||
// sets up the Tendermint root and ensures that the root exists | |||
func ParseConfig() (*cfg.Config, error) { | |||
conf := cfg.DefaultConfig() | |||
err := viper.Unmarshal(conf) | |||
if err != nil { | |||
return nil, err | |||
} | |||
conf.SetRoot(conf.RootDir) | |||
cfg.EnsureRoot(conf.RootDir) | |||
return conf, err | |||
} | |||
// RootCmd is the root command for Tendermint core. | |||
var RootCmd = &cobra.Command{ | |||
Use: "tendermint", | |||
Short: "Tendermint Core (BFT Consensus) in Go", | |||
PersistentPreRunE: func(cmd *cobra.Command, args []string) (err error) { | |||
if cmd.Name() == VersionCmd.Name() { | |||
return nil | |||
} | |||
config, err = ParseConfig() | |||
if err != nil { | |||
return err | |||
} | |||
logger, err = tmflags.ParseLogLevel(config.LogLevel, logger, cfg.DefaultLogLevel()) | |||
if err != nil { | |||
return err | |||
} | |||
if viper.GetBool(cli.TraceFlag) { | |||
logger = log.NewTracingLogger(logger) | |||
} | |||
logger = logger.With("module", "main") | |||
return nil | |||
}, | |||
} |
@ -1,176 +0,0 @@ | |||
package commands | |||
import ( | |||
"fmt" | |||
"io/ioutil" | |||
"os" | |||
"path/filepath" | |||
"strconv" | |||
"testing" | |||
"github.com/spf13/cobra" | |||
"github.com/spf13/viper" | |||
"github.com/stretchr/testify/assert" | |||
"github.com/stretchr/testify/require" | |||
cfg "github.com/tendermint/tendermint/config" | |||
"github.com/tendermint/tmlibs/cli" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
var ( | |||
defaultRoot = os.ExpandEnv("$HOME/.some/test/dir") | |||
) | |||
const ( | |||
rootName = "root" | |||
) | |||
// clearConfig clears env vars, the given root dir, and resets viper. | |||
func clearConfig(dir string) { | |||
if err := os.Unsetenv("TMHOME"); err != nil { | |||
panic(err) | |||
} | |||
if err := os.Unsetenv("TM_HOME"); err != nil { | |||
panic(err) | |||
} | |||
if err := os.RemoveAll(dir); err != nil { | |||
panic(err) | |||
} | |||
viper.Reset() | |||
config = cfg.DefaultConfig() | |||
} | |||
// prepare new rootCmd | |||
func testRootCmd() *cobra.Command { | |||
rootCmd := &cobra.Command{ | |||
Use: RootCmd.Use, | |||
PersistentPreRunE: RootCmd.PersistentPreRunE, | |||
Run: func(cmd *cobra.Command, args []string) {}, | |||
} | |||
registerFlagsRootCmd(rootCmd) | |||
var l string | |||
rootCmd.PersistentFlags().String("log", l, "Log") | |||
return rootCmd | |||
} | |||
func testSetup(rootDir string, args []string, env map[string]string) error { | |||
clearConfig(defaultRoot) | |||
rootCmd := testRootCmd() | |||
cmd := cli.PrepareBaseCmd(rootCmd, "TM", defaultRoot) | |||
// run with the args and env | |||
args = append([]string{rootCmd.Use}, args...) | |||
return cli.RunWithArgs(cmd, args, env) | |||
} | |||
func TestRootHome(t *testing.T) { | |||
newRoot := filepath.Join(defaultRoot, "something-else") | |||
cases := []struct { | |||
args []string | |||
env map[string]string | |||
root string | |||
}{ | |||
{nil, nil, defaultRoot}, | |||
{[]string{"--home", newRoot}, nil, newRoot}, | |||
{nil, map[string]string{"TMHOME": newRoot}, newRoot}, | |||
} | |||
for i, tc := range cases { | |||
idxString := strconv.Itoa(i) | |||
err := testSetup(defaultRoot, tc.args, tc.env) | |||
require.Nil(t, err, idxString) | |||
assert.Equal(t, tc.root, config.RootDir, idxString) | |||
assert.Equal(t, tc.root, config.P2P.RootDir, idxString) | |||
assert.Equal(t, tc.root, config.Consensus.RootDir, idxString) | |||
assert.Equal(t, tc.root, config.Mempool.RootDir, idxString) | |||
} | |||
} | |||
func TestRootFlagsEnv(t *testing.T) { | |||
// defaults | |||
defaults := cfg.DefaultConfig() | |||
defaultLogLvl := defaults.LogLevel | |||
cases := []struct { | |||
args []string | |||
env map[string]string | |||
logLevel string | |||
}{ | |||
{[]string{"--log", "debug"}, nil, defaultLogLvl}, // wrong flag | |||
{[]string{"--log_level", "debug"}, nil, "debug"}, // right flag | |||
{nil, map[string]string{"TM_LOW": "debug"}, defaultLogLvl}, // wrong env flag | |||
{nil, map[string]string{"MT_LOG_LEVEL": "debug"}, defaultLogLvl}, // wrong env prefix | |||
{nil, map[string]string{"TM_LOG_LEVEL": "debug"}, "debug"}, // right env | |||
} | |||
for i, tc := range cases { | |||
idxString := strconv.Itoa(i) | |||
err := testSetup(defaultRoot, tc.args, tc.env) | |||
require.Nil(t, err, idxString) | |||
assert.Equal(t, tc.logLevel, config.LogLevel, idxString) | |||
} | |||
} | |||
func TestRootConfig(t *testing.T) { | |||
// write non-default config | |||
nonDefaultLogLvl := "abc:debug" | |||
cvals := map[string]string{ | |||
"log_level": nonDefaultLogLvl, | |||
} | |||
cases := []struct { | |||
args []string | |||
env map[string]string | |||
logLvl string | |||
}{ | |||
{nil, nil, nonDefaultLogLvl}, // should load config | |||
{[]string{"--log_level=abc:info"}, nil, "abc:info"}, // flag over rides | |||
{nil, map[string]string{"TM_LOG_LEVEL": "abc:info"}, "abc:info"}, // env over rides | |||
} | |||
for i, tc := range cases { | |||
idxString := strconv.Itoa(i) | |||
clearConfig(defaultRoot) | |||
// XXX: path must match cfg.defaultConfigPath | |||
configFilePath := filepath.Join(defaultRoot, "config") | |||
err := cmn.EnsureDir(configFilePath, 0700) | |||
require.Nil(t, err) | |||
// write the non-defaults to a different path | |||
// TODO: support writing sub configs so we can test that too | |||
err = WriteConfigVals(configFilePath, cvals) | |||
require.Nil(t, err) | |||
rootCmd := testRootCmd() | |||
cmd := cli.PrepareBaseCmd(rootCmd, "TM", defaultRoot) | |||
// run with the args and env | |||
tc.args = append([]string{rootCmd.Use}, tc.args...) | |||
err = cli.RunWithArgs(cmd, tc.args, tc.env) | |||
require.Nil(t, err, idxString) | |||
assert.Equal(t, tc.logLvl, config.LogLevel, idxString) | |||
} | |||
} | |||
// WriteConfigVals writes a toml file with the given values.
// It returns an error if the file cannot be written.
func WriteConfigVals(dir string, vals map[string]string) error { | |||
data := "" | |||
for k, v := range vals { | |||
data = data + fmt.Sprintf("%s = \"%s\"\n", k, v) | |||
} | |||
cfile := filepath.Join(dir, "config.toml") | |||
return ioutil.WriteFile(cfile, []byte(data), 0666) | |||
} |
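// Illustrative output (not part of the original file): the cvals map used in
// TestRootConfig above renders to a single flat TOML line in config.toml:
//
//	log_level = "abc:debug"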
@ -1,72 +0,0 @@ | |||
package commands | |||
import ( | |||
"fmt" | |||
"github.com/spf13/cobra" | |||
nm "github.com/tendermint/tendermint/node" | |||
) | |||
// AddNodeFlags exposes some common configuration options on the command line.
// These are exposed for the convenience of commands embedding a tendermint node.
func AddNodeFlags(cmd *cobra.Command) { | |||
// bind flags | |||
cmd.Flags().String("moniker", config.Moniker, "Node Name") | |||
// priv val flags | |||
cmd.Flags().String("priv_validator_laddr", config.PrivValidatorListenAddr, "Socket address to listen on for connections from external priv_validator process") | |||
// node flags | |||
cmd.Flags().Bool("fast_sync", config.FastSync, "Fast blockchain syncing") | |||
// abci flags | |||
cmd.Flags().String("proxy_app", config.ProxyApp, "Proxy app address, or 'nilapp' or 'kvstore' for local testing.") | |||
cmd.Flags().String("abci", config.ABCI, "Specify abci transport (socket | grpc)") | |||
// rpc flags | |||
cmd.Flags().String("rpc.laddr", config.RPC.ListenAddress, "RPC listen address. Port required") | |||
cmd.Flags().String("rpc.grpc_laddr", config.RPC.GRPCListenAddress, "GRPC listen address (BroadcastTx only). Port required") | |||
cmd.Flags().Bool("rpc.unsafe", config.RPC.Unsafe, "Enabled unsafe rpc methods") | |||
// p2p flags | |||
cmd.Flags().String("p2p.laddr", config.P2P.ListenAddress, "Node listen address. (0.0.0.0:0 means any interface, any port)") | |||
cmd.Flags().String("p2p.seeds", config.P2P.Seeds, "Comma-delimited ID@host:port seed nodes") | |||
cmd.Flags().String("p2p.persistent_peers", config.P2P.PersistentPeers, "Comma-delimited ID@host:port persistent peers") | |||
cmd.Flags().Bool("p2p.skip_upnp", config.P2P.SkipUPNP, "Skip UPNP configuration") | |||
cmd.Flags().Bool("p2p.pex", config.P2P.PexReactor, "Enable/disable Peer-Exchange") | |||
cmd.Flags().Bool("p2p.seed_mode", config.P2P.SeedMode, "Enable/disable seed mode") | |||
cmd.Flags().String("p2p.private_peer_ids", config.P2P.PrivatePeerIDs, "Comma-delimited private peer IDs") | |||
// consensus flags | |||
cmd.Flags().Bool("consensus.create_empty_blocks", config.Consensus.CreateEmptyBlocks, "Set this to false to only produce blocks when there are txs or when the AppHash changes") | |||
} | |||
// NewRunNodeCmd returns the command that allows the CLI to start a node. | |||
// It can be used with a custom PrivValidator and in-process ABCI application. | |||
func NewRunNodeCmd(nodeProvider nm.NodeProvider) *cobra.Command { | |||
cmd := &cobra.Command{ | |||
Use: "node", | |||
Short: "Run the tendermint node", | |||
RunE: func(cmd *cobra.Command, args []string) error { | |||
// Create & start node | |||
n, err := nodeProvider(config, logger) | |||
if err != nil { | |||
return fmt.Errorf("Failed to create node: %v", err) | |||
} | |||
if err := n.Start(); err != nil { | |||
return fmt.Errorf("Failed to start node: %v", err) | |||
} | |||
logger.Info("Started node", "nodeInfo", n.Switch().NodeInfo()) | |||
// Trap signal, run forever. | |||
n.RunForever() | |||
return nil | |||
}, | |||
} | |||
AddNodeFlags(cmd) | |||
return cmd | |||
} |
@ -1,27 +0,0 @@ | |||
package commands | |||
import ( | |||
"fmt" | |||
"github.com/spf13/cobra" | |||
"github.com/tendermint/tendermint/p2p" | |||
) | |||
// ShowNodeIDCmd dumps node's ID to the standard output. | |||
var ShowNodeIDCmd = &cobra.Command{ | |||
Use: "show_node_id", | |||
Short: "Show this node's ID", | |||
RunE: showNodeID, | |||
} | |||
func showNodeID(cmd *cobra.Command, args []string) error { | |||
nodeKey, err := p2p.LoadNodeKey(config.NodeKeyFile()) | |||
if err != nil { | |||
return err | |||
} | |||
fmt.Println(nodeKey.ID()) | |||
return nil | |||
} |
@ -1,22 +0,0 @@ | |||
package commands | |||
import ( | |||
"fmt" | |||
"github.com/spf13/cobra" | |||
"github.com/tendermint/tendermint/privval" | |||
) | |||
// ShowValidatorCmd adds capabilities for showing the validator info. | |||
var ShowValidatorCmd = &cobra.Command{ | |||
Use: "show_validator", | |||
Short: "Show this node's validator info", | |||
Run: showValidator, | |||
} | |||
func showValidator(cmd *cobra.Command, args []string) { | |||
privValidator := privval.LoadOrGenFilePV(config.PrivValidatorFile()) | |||
pubKeyJSONBytes, _ := cdc.MarshalJSON(privValidator.GetPubKey()) | |||
fmt.Println(string(pubKeyJSONBytes)) | |||
} |
@ -1,183 +0,0 @@ | |||
package commands | |||
import ( | |||
"fmt" | |||
"net" | |||
"os" | |||
"path/filepath" | |||
"strings" | |||
"time" | |||
"github.com/spf13/cobra" | |||
cfg "github.com/tendermint/tendermint/config" | |||
"github.com/tendermint/tendermint/p2p" | |||
"github.com/tendermint/tendermint/privval" | |||
"github.com/tendermint/tendermint/types" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
var ( | |||
nValidators int | |||
nNonValidators int | |||
outputDir string | |||
nodeDirPrefix string | |||
populatePersistentPeers bool | |||
hostnamePrefix string | |||
startingIPAddress string | |||
p2pPort int | |||
) | |||
const ( | |||
nodeDirPerm = 0755 | |||
) | |||
func init() { | |||
TestnetFilesCmd.Flags().IntVar(&nValidators, "v", 4, | |||
"Number of validators to initialize the testnet with") | |||
TestnetFilesCmd.Flags().IntVar(&nNonValidators, "n", 0, | |||
"Number of non-validators to initialize the testnet with") | |||
TestnetFilesCmd.Flags().StringVar(&outputDir, "o", "./mytestnet", | |||
"Directory to store initialization data for the testnet") | |||
TestnetFilesCmd.Flags().StringVar(&nodeDirPrefix, "node-dir-prefix", "node", | |||
"Prefix the directory name for each node with (node results in node0, node1, ...)") | |||
TestnetFilesCmd.Flags().BoolVar(&populatePersistentPeers, "populate-persistent-peers", true, | |||
"Update config of each node with the list of persistent peers build using either hostname-prefix or starting-ip-address") | |||
TestnetFilesCmd.Flags().StringVar(&hostnamePrefix, "hostname-prefix", "node", | |||
"Hostname prefix (node results in persistent peers list ID0@node0:26656, ID1@node1:26656, ...)") | |||
TestnetFilesCmd.Flags().StringVar(&startingIPAddress, "starting-ip-address", "", | |||
"Starting IP address (192.168.0.1 results in persistent peers list ID0@192.168.0.1:26656, ID1@192.168.0.2:26656, ...)") | |||
TestnetFilesCmd.Flags().IntVar(&p2pPort, "p2p-port", 26656, | |||
"P2P Port") | |||
} | |||
// TestnetFilesCmd allows initialisation of files for a Tendermint testnet. | |||
var TestnetFilesCmd = &cobra.Command{ | |||
Use: "testnet", | |||
Short: "Initialize files for a Tendermint testnet", | |||
Long: `testnet will create "v" + "n" number of directories and populate each with | |||
necessary files (private validator, genesis, config, etc.). | |||
Note, strict routability for addresses is turned off in the config file. | |||
Optionally, it will fill in persistent_peers list in config file using either hostnames or IPs. | |||
Example: | |||
tendermint testnet --v 4 --o ./output --populate-persistent-peers --starting-ip-address 192.168.10.2 | |||
`, | |||
RunE: testnetFiles, | |||
} | |||
func testnetFiles(cmd *cobra.Command, args []string) error { | |||
config := cfg.DefaultConfig() | |||
genVals := make([]types.GenesisValidator, nValidators) | |||
for i := 0; i < nValidators; i++ { | |||
nodeDirName := cmn.Fmt("%s%d", nodeDirPrefix, i) | |||
nodeDir := filepath.Join(outputDir, nodeDirName) | |||
config.SetRoot(nodeDir) | |||
err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm) | |||
if err != nil { | |||
_ = os.RemoveAll(outputDir) | |||
return err | |||
} | |||
if err := initFilesWithConfig(config); err != nil {
_ = os.RemoveAll(outputDir)
return err
}
pvFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidator) | |||
pv := privval.LoadFilePV(pvFile) | |||
genVals[i] = types.GenesisValidator{ | |||
PubKey: pv.GetPubKey(), | |||
Power: 1, | |||
Name: nodeDirName, | |||
} | |||
} | |||
for i := 0; i < nNonValidators; i++ { | |||
nodeDir := filepath.Join(outputDir, cmn.Fmt("%s%d", nodeDirPrefix, i+nValidators)) | |||
config.SetRoot(nodeDir) | |||
err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm) | |||
if err != nil { | |||
_ = os.RemoveAll(outputDir) | |||
return err | |||
} | |||
if err := initFilesWithConfig(config); err != nil {
_ = os.RemoveAll(outputDir)
return err
}
} | |||
// Generate genesis doc from generated validators | |||
genDoc := &types.GenesisDoc{ | |||
GenesisTime: time.Now(), | |||
ChainID: "chain-" + cmn.RandStr(6), | |||
Validators: genVals, | |||
} | |||
// Write genesis file. | |||
for i := 0; i < nValidators+nNonValidators; i++ { | |||
nodeDir := filepath.Join(outputDir, cmn.Fmt("%s%d", nodeDirPrefix, i)) | |||
if err := genDoc.SaveAs(filepath.Join(nodeDir, config.BaseConfig.Genesis)); err != nil { | |||
_ = os.RemoveAll(outputDir) | |||
return err | |||
} | |||
} | |||
if populatePersistentPeers { | |||
err := populatePersistentPeersInConfigAndWriteIt(config) | |||
if err != nil { | |||
_ = os.RemoveAll(outputDir) | |||
return err | |||
} | |||
} | |||
fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators) | |||
return nil | |||
} | |||
func hostnameOrIP(i int) string { | |||
if startingIPAddress != "" { | |||
ip := net.ParseIP(startingIPAddress) | |||
ip = ip.To4() | |||
if ip == nil { | |||
fmt.Printf("%v: non ipv4 address\n", startingIPAddress) | |||
os.Exit(1) | |||
} | |||
for j := 0; j < i; j++ { | |||
ip[3]++ | |||
} | |||
return ip.String() | |||
} | |||
return fmt.Sprintf("%s%d", hostnamePrefix, i) | |||
} | |||
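// Illustrative sketch (hypothetical helper, not part of the original file):
// the i-th node gets the starting IP plus i in the last octet, or
// hostnamePrefix+i when no starting IP is given. Mutates the package flags
// purely for demonstration.
func exampleHostnameOrIP() {
startingIPAddress = "192.168.0.1"
fmt.Println(hostnameOrIP(0)) // "192.168.0.1"
fmt.Println(hostnameOrIP(2)) // "192.168.0.3"
startingIPAddress = ""
hostnamePrefix = "node"
fmt.Println(hostnameOrIP(2)) // "node2"
}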
func populatePersistentPeersInConfigAndWriteIt(config *cfg.Config) error { | |||
persistentPeers := make([]string, nValidators+nNonValidators) | |||
for i := 0; i < nValidators+nNonValidators; i++ { | |||
nodeDir := filepath.Join(outputDir, cmn.Fmt("%s%d", nodeDirPrefix, i)) | |||
config.SetRoot(nodeDir) | |||
nodeKey, err := p2p.LoadNodeKey(config.NodeKeyFile()) | |||
if err != nil { | |||
return err | |||
} | |||
persistentPeers[i] = p2p.IDAddressString(nodeKey.ID(), fmt.Sprintf("%s:%d", hostnameOrIP(i), p2pPort)) | |||
} | |||
persistentPeersList := strings.Join(persistentPeers, ",") | |||
for i := 0; i < nValidators+nNonValidators; i++ { | |||
nodeDir := filepath.Join(outputDir, cmn.Fmt("%s%d", nodeDirPrefix, i)) | |||
config.SetRoot(nodeDir) | |||
config.P2P.PersistentPeers = persistentPeersList | |||
config.P2P.AddrBookStrict = false | |||
// overwrite default config | |||
cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config) | |||
} | |||
return nil | |||
} |
@ -1,18 +0,0 @@ | |||
package commands | |||
import ( | |||
"fmt" | |||
"github.com/spf13/cobra" | |||
"github.com/tendermint/tendermint/version" | |||
) | |||
// VersionCmd prints the version info.
var VersionCmd = &cobra.Command{ | |||
Use: "version", | |||
Short: "Show version info", | |||
Run: func(cmd *cobra.Command, args []string) { | |||
fmt.Println(version.Version) | |||
}, | |||
} |
@ -1,12 +0,0 @@ | |||
package commands | |||
import ( | |||
"github.com/tendermint/go-amino" | |||
"github.com/tendermint/go-crypto" | |||
) | |||
var cdc = amino.NewCodec() | |||
func init() { | |||
crypto.RegisterAmino(cdc) | |||
} |
@ -1,48 +0,0 @@ | |||
package main | |||
import ( | |||
"os" | |||
"path/filepath" | |||
"github.com/tendermint/tmlibs/cli" | |||
cmd "github.com/tendermint/tendermint/cmd/tendermint/commands" | |||
cfg "github.com/tendermint/tendermint/config" | |||
nm "github.com/tendermint/tendermint/node" | |||
) | |||
func main() { | |||
rootCmd := cmd.RootCmd | |||
rootCmd.AddCommand( | |||
cmd.GenValidatorCmd, | |||
cmd.InitFilesCmd, | |||
cmd.ProbeUpnpCmd, | |||
cmd.LiteCmd, | |||
cmd.ReplayCmd, | |||
cmd.ReplayConsoleCmd, | |||
cmd.ResetAllCmd, | |||
cmd.ResetPrivValidatorCmd, | |||
cmd.ShowValidatorCmd, | |||
cmd.TestnetFilesCmd, | |||
cmd.ShowNodeIDCmd, | |||
cmd.GenNodeKeyCmd, | |||
cmd.VersionCmd) | |||
// NOTE: | |||
// Users wishing to: | |||
// * Use an external signer for their validators | |||
// * Supply an in-proc abci app | |||
// * Supply a genesis doc file from another source | |||
// * Provide their own DB implementation | |||
// can copy this file and use something other than the | |||
// DefaultNewNode function | |||
nodeFunc := nm.DefaultNewNode | |||
// Create & start node | |||
rootCmd.AddCommand(cmd.NewRunNodeCmd(nodeFunc)) | |||
cmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv(filepath.Join("$HOME", cfg.DefaultTendermintDir))) | |||
if err := cmd.Execute(); err != nil { | |||
panic(err) | |||
} | |||
} |
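// Illustrative sketch (assumption, not part of the original file): a custom
// NodeProvider has the same shape as nm.DefaultNewNode, so a fork of this
// file can tweak the config or wire in custom components before delegating
// (assumes an extra import of the tmlibs log package):
//
//	nodeFunc := func(config *cfg.Config, logger log.Logger) (*nm.Node, error) {
//		// customize config (genesis source, DB, in-proc ABCI app) here
//		return nm.DefaultNewNode(config, logger)
//	}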
@ -1,23 +0,0 @@ | |||
coverage: | |||
precision: 2 | |||
round: down | |||
range: "70...100" | |||
status: | |||
project: | |||
default: | |||
threshold: 1% | |||
patch: on | |||
changes: off | |||
comment: | |||
layout: "diff, files" | |||
behavior: default | |||
require_changes: no | |||
require_base: no | |||
require_head: yes | |||
ignore: | |||
- "docs" | |||
- "DOCKER" | |||
- "scripts" |
@ -1,632 +0,0 @@ | |||
package config | |||
import ( | |||
"fmt" | |||
"os" | |||
"path/filepath" | |||
"time" | |||
) | |||
const ( | |||
// FuzzModeDrop is a mode in which we randomly drop reads/writes and connections
FuzzModeDrop = iota | |||
// FuzzModeDelay is a mode in which we randomly sleep | |||
FuzzModeDelay | |||
) | |||
// NOTE: Most of the structs & relevant comments + the | |||
// default configuration options were used to manually | |||
// generate the config.toml. Please reflect any changes | |||
// made here in the defaultConfigTemplate constant in | |||
// config/toml.go | |||
// NOTE: tmlibs/cli must know to look in the config dir! | |||
var ( | |||
DefaultTendermintDir = ".tendermint" | |||
defaultConfigDir = "config" | |||
defaultDataDir = "data" | |||
defaultConfigFileName = "config.toml" | |||
defaultGenesisJSONName = "genesis.json" | |||
defaultPrivValName = "priv_validator.json" | |||
defaultNodeKeyName = "node_key.json" | |||
defaultAddrBookName = "addrbook.json" | |||
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName) | |||
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName) | |||
defaultPrivValPath = filepath.Join(defaultConfigDir, defaultPrivValName) | |||
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName) | |||
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName) | |||
) | |||
// Config defines the top level configuration for a Tendermint node | |||
type Config struct { | |||
// Top level options use an anonymous struct | |||
BaseConfig `mapstructure:",squash"` | |||
// Options for services | |||
RPC *RPCConfig `mapstructure:"rpc"` | |||
P2P *P2PConfig `mapstructure:"p2p"` | |||
Mempool *MempoolConfig `mapstructure:"mempool"` | |||
Consensus *ConsensusConfig `mapstructure:"consensus"` | |||
TxIndex *TxIndexConfig `mapstructure:"tx_index"` | |||
Instrumentation *InstrumentationConfig `mapstructure:"instrumentation"` | |||
} | |||
// DefaultConfig returns a default configuration for a Tendermint node | |||
func DefaultConfig() *Config { | |||
return &Config{ | |||
BaseConfig: DefaultBaseConfig(), | |||
RPC: DefaultRPCConfig(), | |||
P2P: DefaultP2PConfig(), | |||
Mempool: DefaultMempoolConfig(), | |||
Consensus: DefaultConsensusConfig(), | |||
TxIndex: DefaultTxIndexConfig(), | |||
Instrumentation: DefaultInstrumentationConfig(), | |||
} | |||
} | |||
// TestConfig returns a configuration that can be used for testing | |||
func TestConfig() *Config { | |||
return &Config{ | |||
BaseConfig: TestBaseConfig(), | |||
RPC: TestRPCConfig(), | |||
P2P: TestP2PConfig(), | |||
Mempool: TestMempoolConfig(), | |||
Consensus: TestConsensusConfig(), | |||
TxIndex: TestTxIndexConfig(), | |||
Instrumentation: TestInstrumentationConfig(), | |||
} | |||
} | |||
// SetRoot sets the RootDir for all Config structs | |||
func (cfg *Config) SetRoot(root string) *Config { | |||
cfg.BaseConfig.RootDir = root | |||
cfg.RPC.RootDir = root | |||
cfg.P2P.RootDir = root | |||
cfg.Mempool.RootDir = root | |||
cfg.Consensus.RootDir = root | |||
return cfg | |||
} | |||
//----------------------------------------------------------------------------- | |||
// BaseConfig | |||
// BaseConfig defines the base configuration for a Tendermint node | |||
type BaseConfig struct { | |||
// chainID is unexposed and immutable but here for convenience | |||
chainID string | |||
// The root directory for all data. | |||
// This should be set in viper so it can unmarshal into this struct | |||
RootDir string `mapstructure:"home"` | |||
// Path to the JSON file containing the initial validator set and other meta data | |||
Genesis string `mapstructure:"genesis_file"` | |||
// Path to the JSON file containing the private key to use as a validator in the consensus protocol | |||
PrivValidator string `mapstructure:"priv_validator_file"` | |||
// A JSON file containing the private key to use for p2p authenticated encryption | |||
NodeKey string `mapstructure:"node_key_file"` | |||
// A custom human readable name for this node | |||
Moniker string `mapstructure:"moniker"` | |||
// TCP or UNIX socket address for Tendermint to listen on for | |||
// connections from an external PrivValidator process | |||
PrivValidatorListenAddr string `mapstructure:"priv_validator_laddr"` | |||
// TCP or UNIX socket address of the ABCI application, | |||
// or the name of an ABCI application compiled in with the Tendermint binary | |||
ProxyApp string `mapstructure:"proxy_app"` | |||
// Mechanism to connect to the ABCI application: socket | grpc | |||
ABCI string `mapstructure:"abci"` | |||
// Output level for logging | |||
LogLevel string `mapstructure:"log_level"` | |||
// TCP or UNIX socket address for the profiling server to listen on | |||
ProfListenAddress string `mapstructure:"prof_laddr"` | |||
// If this node is many blocks behind the tip of the chain, FastSync
// allows it to catch up quickly by downloading blocks in parallel
// and verifying their commits
FastSync bool `mapstructure:"fast_sync"` | |||
// If true, query the ABCI app on connecting to a new peer | |||
// so the app can decide if we should keep the connection or not | |||
FilterPeers bool `mapstructure:"filter_peers"` // false | |||
// Database backend: leveldb | memdb | |||
DBBackend string `mapstructure:"db_backend"` | |||
// Database directory | |||
DBPath string `mapstructure:"db_dir"` | |||
} | |||
// DefaultBaseConfig returns a default base configuration for a Tendermint node | |||
func DefaultBaseConfig() BaseConfig { | |||
return BaseConfig{ | |||
Genesis: defaultGenesisJSONPath, | |||
PrivValidator: defaultPrivValPath, | |||
NodeKey: defaultNodeKeyPath, | |||
Moniker: defaultMoniker, | |||
ProxyApp: "tcp://127.0.0.1:26658", | |||
ABCI: "socket", | |||
LogLevel: DefaultPackageLogLevels(), | |||
ProfListenAddress: "", | |||
FastSync: true, | |||
FilterPeers: false, | |||
DBBackend: "leveldb", | |||
DBPath: "data", | |||
} | |||
} | |||
// TestBaseConfig returns a base configuration for testing a Tendermint node | |||
func TestBaseConfig() BaseConfig { | |||
cfg := DefaultBaseConfig() | |||
cfg.chainID = "tendermint_test" | |||
cfg.ProxyApp = "kvstore" | |||
cfg.FastSync = false | |||
cfg.DBBackend = "memdb" | |||
return cfg | |||
} | |||
func (cfg BaseConfig) ChainID() string { | |||
return cfg.chainID | |||
} | |||
// GenesisFile returns the full path to the genesis.json file | |||
func (cfg BaseConfig) GenesisFile() string { | |||
return rootify(cfg.Genesis, cfg.RootDir) | |||
} | |||
// PrivValidatorFile returns the full path to the priv_validator.json file | |||
func (cfg BaseConfig) PrivValidatorFile() string { | |||
return rootify(cfg.PrivValidator, cfg.RootDir) | |||
} | |||
// NodeKeyFile returns the full path to the node_key.json file | |||
func (cfg BaseConfig) NodeKeyFile() string { | |||
return rootify(cfg.NodeKey, cfg.RootDir) | |||
} | |||
// DBDir returns the full path to the database directory | |||
func (cfg BaseConfig) DBDir() string { | |||
return rootify(cfg.DBPath, cfg.RootDir) | |||
} | |||
// DefaultLogLevel returns a default log level of "error" | |||
func DefaultLogLevel() string { | |||
return "error" | |||
} | |||
// DefaultPackageLogLevels returns a default log level setting so all packages | |||
// log at "error", while the `state` and `main` packages log at "info" | |||
func DefaultPackageLogLevels() string { | |||
return fmt.Sprintf("main:info,state:info,*:%s", DefaultLogLevel()) | |||
} | |||
//----------------------------------------------------------------------------- | |||
// RPCConfig | |||
// RPCConfig defines the configuration options for the Tendermint RPC server | |||
type RPCConfig struct { | |||
RootDir string `mapstructure:"home"` | |||
// TCP or UNIX socket address for the RPC server to listen on | |||
ListenAddress string `mapstructure:"laddr"` | |||
// TCP or UNIX socket address for the gRPC server to listen on | |||
// NOTE: This server only supports /broadcast_tx_commit | |||
GRPCListenAddress string `mapstructure:"grpc_laddr"` | |||
// Activate unsafe RPC commands like /dial_persistent_peers and /unsafe_flush_mempool | |||
Unsafe bool `mapstructure:"unsafe"` | |||
} | |||
// DefaultRPCConfig returns a default configuration for the RPC server | |||
func DefaultRPCConfig() *RPCConfig { | |||
return &RPCConfig{ | |||
ListenAddress: "tcp://0.0.0.0:26657", | |||
GRPCListenAddress: "", | |||
Unsafe: false, | |||
} | |||
} | |||
// TestRPCConfig returns a configuration for testing the RPC server | |||
func TestRPCConfig() *RPCConfig { | |||
cfg := DefaultRPCConfig() | |||
cfg.ListenAddress = "tcp://0.0.0.0:36657" | |||
cfg.GRPCListenAddress = "tcp://0.0.0.0:36658" | |||
cfg.Unsafe = true | |||
return cfg | |||
} | |||
//----------------------------------------------------------------------------- | |||
// P2PConfig | |||
// P2PConfig defines the configuration options for the Tendermint peer-to-peer networking layer | |||
type P2PConfig struct { | |||
RootDir string `mapstructure:"home"` | |||
// Address to listen for incoming connections | |||
ListenAddress string `mapstructure:"laddr"` | |||
// Comma separated list of seed nodes to connect to | |||
// We only use these if we can’t connect to peers in the addrbook | |||
Seeds string `mapstructure:"seeds"` | |||
// Comma separated list of nodes to keep persistent connections to | |||
// Do not add private peers to this list if you don't want them advertised | |||
PersistentPeers string `mapstructure:"persistent_peers"` | |||
// Skip UPNP port forwarding | |||
SkipUPNP bool `mapstructure:"skip_upnp"` | |||
// Path to address book | |||
AddrBook string `mapstructure:"addr_book_file"` | |||
// Set true for strict address routability rules | |||
AddrBookStrict bool `mapstructure:"addr_book_strict"` | |||
// Maximum number of peers to connect to | |||
MaxNumPeers int `mapstructure:"max_num_peers"` | |||
// Time to wait before flushing messages out on the connection, in ms | |||
FlushThrottleTimeout int `mapstructure:"flush_throttle_timeout"` | |||
// Maximum size of a message packet payload, in bytes | |||
MaxPacketMsgPayloadSize int `mapstructure:"max_packet_msg_payload_size"` | |||
// Rate at which packets can be sent, in bytes/second | |||
SendRate int64 `mapstructure:"send_rate"` | |||
// Rate at which packets can be received, in bytes/second | |||
RecvRate int64 `mapstructure:"recv_rate"` | |||
// Set true to enable the peer-exchange reactor | |||
PexReactor bool `mapstructure:"pex"` | |||
// Seed mode, in which node constantly crawls the network and looks for | |||
// peers. If another node asks it for addresses, it responds and disconnects. | |||
// | |||
// Does not work if the peer-exchange reactor is disabled. | |||
SeedMode bool `mapstructure:"seed_mode"` | |||
// Comma separated list of peer IDs to keep private (will not be gossiped to | |||
// other peers) | |||
PrivatePeerIDs string `mapstructure:"private_peer_ids"` | |||
// Toggle to disable guard against peers connecting from the same ip. | |||
AllowDuplicateIP bool `mapstructure:"allow_duplicate_ip"` | |||
// Peer connection configuration. | |||
HandshakeTimeout time.Duration `mapstructure:"handshake_timeout"` | |||
DialTimeout time.Duration `mapstructure:"dial_timeout"` | |||
// Testing params. | |||
// Force dial to fail | |||
TestDialFail bool `mapstructure:"test_dial_fail"` | |||
// Fuzz connection
TestFuzz bool `mapstructure:"test_fuzz"` | |||
TestFuzzConfig *FuzzConnConfig `mapstructure:"test_fuzz_config"` | |||
} | |||
// DefaultP2PConfig returns a default configuration for the peer-to-peer layer | |||
func DefaultP2PConfig() *P2PConfig { | |||
return &P2PConfig{ | |||
ListenAddress: "tcp://0.0.0.0:26656", | |||
AddrBook: defaultAddrBookPath, | |||
AddrBookStrict: true, | |||
MaxNumPeers: 50, | |||
FlushThrottleTimeout: 100, | |||
MaxPacketMsgPayloadSize: 1024, // 1 kB | |||
SendRate: 512000, // 500 kB/s | |||
RecvRate: 512000, // 500 kB/s | |||
PexReactor: true, | |||
SeedMode: false, | |||
AllowDuplicateIP: true, // so non-breaking yet | |||
HandshakeTimeout: 20 * time.Second, | |||
DialTimeout: 3 * time.Second, | |||
TestDialFail: false, | |||
TestFuzz: false, | |||
TestFuzzConfig: DefaultFuzzConnConfig(), | |||
} | |||
} | |||
// TestP2PConfig returns a configuration for testing the peer-to-peer layer | |||
func TestP2PConfig() *P2PConfig { | |||
cfg := DefaultP2PConfig() | |||
cfg.ListenAddress = "tcp://0.0.0.0:36656" | |||
cfg.SkipUPNP = true | |||
cfg.FlushThrottleTimeout = 10 | |||
cfg.AllowDuplicateIP = true | |||
return cfg | |||
} | |||
// AddrBookFile returns the full path to the address book | |||
func (cfg *P2PConfig) AddrBookFile() string { | |||
return rootify(cfg.AddrBook, cfg.RootDir) | |||
} | |||
// FuzzConnConfig is a FuzzedConnection configuration. | |||
type FuzzConnConfig struct { | |||
Mode int | |||
MaxDelay time.Duration | |||
ProbDropRW float64 | |||
ProbDropConn float64 | |||
ProbSleep float64 | |||
} | |||
// DefaultFuzzConnConfig returns the default config. | |||
func DefaultFuzzConnConfig() *FuzzConnConfig { | |||
return &FuzzConnConfig{ | |||
Mode: FuzzModeDrop, | |||
MaxDelay: 3 * time.Second, | |||
ProbDropRW: 0.2, | |||
ProbDropConn: 0.00, | |||
ProbSleep: 0.00, | |||
} | |||
} | |||
//----------------------------------------------------------------------------- | |||
// MempoolConfig | |||
// MempoolConfig defines the configuration options for the Tendermint mempool | |||
type MempoolConfig struct { | |||
RootDir string `mapstructure:"home"` | |||
Recheck bool `mapstructure:"recheck"` | |||
RecheckEmpty bool `mapstructure:"recheck_empty"` | |||
Broadcast bool `mapstructure:"broadcast"` | |||
WalPath string `mapstructure:"wal_dir"` | |||
Size int `mapstructure:"size"` | |||
CacheSize int `mapstructure:"cache_size"` | |||
} | |||
// DefaultMempoolConfig returns a default configuration for the Tendermint mempool | |||
func DefaultMempoolConfig() *MempoolConfig { | |||
return &MempoolConfig{ | |||
Recheck: true, | |||
RecheckEmpty: true, | |||
Broadcast: true, | |||
WalPath: filepath.Join(defaultDataDir, "mempool.wal"), | |||
Size: 100000, | |||
CacheSize: 100000, | |||
} | |||
} | |||
// TestMempoolConfig returns a configuration for testing the Tendermint mempool | |||
func TestMempoolConfig() *MempoolConfig { | |||
cfg := DefaultMempoolConfig() | |||
cfg.CacheSize = 1000 | |||
return cfg | |||
} | |||
// WalDir returns the full path to the mempool's write-ahead log | |||
func (cfg *MempoolConfig) WalDir() string { | |||
return rootify(cfg.WalPath, cfg.RootDir) | |||
} | |||
//----------------------------------------------------------------------------- | |||
// ConsensusConfig | |||
// ConsensusConfig defines the configuration for the Tendermint consensus service, | |||
// including timeouts and details about the WAL and the block structure. | |||
type ConsensusConfig struct { | |||
RootDir string `mapstructure:"home"` | |||
WalPath string `mapstructure:"wal_file"` | |||
walFile string // overrides WalPath if set | |||
// All timeouts are in milliseconds | |||
TimeoutPropose int `mapstructure:"timeout_propose"` | |||
TimeoutProposeDelta int `mapstructure:"timeout_propose_delta"` | |||
TimeoutPrevote int `mapstructure:"timeout_prevote"` | |||
TimeoutPrevoteDelta int `mapstructure:"timeout_prevote_delta"` | |||
TimeoutPrecommit int `mapstructure:"timeout_precommit"` | |||
TimeoutPrecommitDelta int `mapstructure:"timeout_precommit_delta"` | |||
TimeoutCommit int `mapstructure:"timeout_commit"` | |||
// Make progress as soon as we have all the precommits (as if TimeoutCommit = 0) | |||
SkipTimeoutCommit bool `mapstructure:"skip_timeout_commit"` | |||
// BlockSize | |||
MaxBlockSizeTxs int `mapstructure:"max_block_size_txs"` | |||
MaxBlockSizeBytes int `mapstructure:"max_block_size_bytes"` | |||
// EmptyBlocks mode and possible interval between empty blocks in seconds | |||
CreateEmptyBlocks bool `mapstructure:"create_empty_blocks"` | |||
CreateEmptyBlocksInterval int `mapstructure:"create_empty_blocks_interval"` | |||
// Reactor sleep duration parameters are in milliseconds | |||
PeerGossipSleepDuration int `mapstructure:"peer_gossip_sleep_duration"` | |||
PeerQueryMaj23SleepDuration int `mapstructure:"peer_query_maj23_sleep_duration"` | |||
} | |||
// DefaultConsensusConfig returns a default configuration for the consensus service | |||
func DefaultConsensusConfig() *ConsensusConfig { | |||
return &ConsensusConfig{ | |||
WalPath: filepath.Join(defaultDataDir, "cs.wal", "wal"), | |||
TimeoutPropose: 3000, | |||
TimeoutProposeDelta: 500, | |||
TimeoutPrevote: 1000, | |||
TimeoutPrevoteDelta: 500, | |||
TimeoutPrecommit: 1000, | |||
TimeoutPrecommitDelta: 500, | |||
TimeoutCommit: 1000, | |||
SkipTimeoutCommit: false, | |||
MaxBlockSizeTxs: 10000, | |||
MaxBlockSizeBytes: 1, // TODO | |||
CreateEmptyBlocks: true, | |||
CreateEmptyBlocksInterval: 0, | |||
PeerGossipSleepDuration: 100, | |||
PeerQueryMaj23SleepDuration: 2000, | |||
} | |||
} | |||
// TestConsensusConfig returns a configuration for testing the consensus service | |||
func TestConsensusConfig() *ConsensusConfig { | |||
cfg := DefaultConsensusConfig() | |||
cfg.TimeoutPropose = 100 | |||
cfg.TimeoutProposeDelta = 1 | |||
cfg.TimeoutPrevote = 10 | |||
cfg.TimeoutPrevoteDelta = 1 | |||
cfg.TimeoutPrecommit = 10 | |||
cfg.TimeoutPrecommitDelta = 1 | |||
cfg.TimeoutCommit = 10 | |||
cfg.SkipTimeoutCommit = true | |||
cfg.PeerGossipSleepDuration = 5 | |||
cfg.PeerQueryMaj23SleepDuration = 250 | |||
return cfg | |||
} | |||
// WaitForTxs returns true if the consensus should wait for transactions before entering the propose step | |||
func (cfg *ConsensusConfig) WaitForTxs() bool { | |||
return !cfg.CreateEmptyBlocks || cfg.CreateEmptyBlocksInterval > 0 | |||
} | |||
// EmptyBlocksInterval returns the amount of time to wait before proposing an empty block or starting the propose timer if there are no txs available
func (cfg *ConsensusConfig) EmptyBlocksInterval() time.Duration { | |||
return time.Duration(cfg.CreateEmptyBlocksInterval) * time.Second | |||
} | |||
// Propose returns the amount of time to wait for a proposal | |||
func (cfg *ConsensusConfig) Propose(round int) time.Duration { | |||
return time.Duration(cfg.TimeoutPropose+cfg.TimeoutProposeDelta*round) * time.Millisecond | |||
} | |||
// Prevote returns the amount of time to wait for straggler votes after receiving any +2/3 prevotes | |||
func (cfg *ConsensusConfig) Prevote(round int) time.Duration { | |||
return time.Duration(cfg.TimeoutPrevote+cfg.TimeoutPrevoteDelta*round) * time.Millisecond | |||
} | |||
// Precommit returns the amount of time to wait for straggler votes after receiving any +2/3 precommits | |||
func (cfg *ConsensusConfig) Precommit(round int) time.Duration { | |||
return time.Duration(cfg.TimeoutPrecommit+cfg.TimeoutPrecommitDelta*round) * time.Millisecond | |||
} | |||
// Commit returns the amount of time to wait for straggler votes after receiving +2/3 precommits for a single block (ie. a commit). | |||
func (cfg *ConsensusConfig) Commit(t time.Time) time.Time { | |||
return t.Add(time.Duration(cfg.TimeoutCommit) * time.Millisecond) | |||
} | |||
// PeerGossipSleep returns the amount of time to sleep if there is nothing to send from the ConsensusReactor | |||
func (cfg *ConsensusConfig) PeerGossipSleep() time.Duration { | |||
return time.Duration(cfg.PeerGossipSleepDuration) * time.Millisecond | |||
} | |||
// PeerQueryMaj23Sleep returns the amount of time to sleep after each VoteSetMaj23Message is sent in the ConsensusReactor | |||
func (cfg *ConsensusConfig) PeerQueryMaj23Sleep() time.Duration { | |||
return time.Duration(cfg.PeerQueryMaj23SleepDuration) * time.Millisecond | |||
} | |||
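// Illustrative sketch (hypothetical helper, not part of the original file):
// with the defaults above, each timeout grows linearly with the round number.
func exampleConsensusTimeouts() {
cfg := DefaultConsensusConfig()
fmt.Println(cfg.Propose(0))   // 3s   (3000ms base)
fmt.Println(cfg.Propose(2))   // 4s   (3000 + 500*2 ms)
fmt.Println(cfg.Prevote(1))   // 1.5s (1000 + 500*1 ms)
fmt.Println(cfg.Precommit(3)) // 2.5s (1000 + 500*3 ms)
fmt.Println(cfg.WaitForTxs()) // false: empty blocks are created by default
}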
// WalFile returns the full path to the write-ahead log file | |||
func (cfg *ConsensusConfig) WalFile() string { | |||
if cfg.walFile != "" { | |||
return cfg.walFile | |||
} | |||
return rootify(cfg.WalPath, cfg.RootDir) | |||
} | |||
// SetWalFile sets the path to the write-ahead log file | |||
func (cfg *ConsensusConfig) SetWalFile(walFile string) { | |||
cfg.walFile = walFile | |||
} | |||
//----------------------------------------------------------------------------- | |||
// TxIndexConfig | |||
// TxIndexConfig defines the configuration for the transaction | |||
// indexer, including tags to index. | |||
type TxIndexConfig struct { | |||
// What indexer to use for transactions | |||
// | |||
// Options: | |||
// 1) "null" | |||
// 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend). | |||
Indexer string `mapstructure:"indexer"` | |||
// Comma-separated list of tags to index (by default the only tag is tx hash) | |||
// | |||
// It's recommended to index only a subset of tags due to possible memory
// bloat. This, of course, depends on the indexer's DB and the volume of
// transactions.
IndexTags string `mapstructure:"index_tags"` | |||
// When set to true, tells the indexer to index all tags. Note this may not
// be desirable (see the comment above). IndexTags takes precedence over
// IndexAllTags (i.e. when both are given, only IndexTags will be indexed).
IndexAllTags bool `mapstructure:"index_all_tags"` | |||
} | |||
// DefaultTxIndexConfig returns a default configuration for the transaction indexer. | |||
func DefaultTxIndexConfig() *TxIndexConfig { | |||
return &TxIndexConfig{ | |||
Indexer: "kv", | |||
IndexTags: "", | |||
IndexAllTags: false, | |||
} | |||
} | |||
// TestTxIndexConfig returns a default configuration for the transaction indexer. | |||
func TestTxIndexConfig() *TxIndexConfig { | |||
return DefaultTxIndexConfig() | |||
} | |||
//----------------------------------------------------------------------------- | |||
// InstrumentationConfig | |||
// InstrumentationConfig defines the configuration for metrics reporting. | |||
type InstrumentationConfig struct { | |||
// When true, Prometheus metrics are served under /metrics on | |||
// PrometheusListenAddr. | |||
// Check out the documentation for the list of available metrics. | |||
Prometheus bool `mapstructure:"prometheus"` | |||
// Address to listen for Prometheus collector(s) connections. | |||
PrometheusListenAddr string `mapstructure:"prometheus_listen_addr"` | |||
} | |||
// DefaultInstrumentationConfig returns a default configuration for metrics | |||
// reporting. | |||
func DefaultInstrumentationConfig() *InstrumentationConfig { | |||
return &InstrumentationConfig{ | |||
Prometheus: false, | |||
PrometheusListenAddr: ":26660", | |||
} | |||
} | |||
// TestInstrumentationConfig returns a default configuration for metrics | |||
// reporting. | |||
func TestInstrumentationConfig() *InstrumentationConfig { | |||
return DefaultInstrumentationConfig() | |||
} | |||
//----------------------------------------------------------------------------- | |||
// Utils | |||
// helper function to make config creation independent of root dir | |||
func rootify(path, root string) string { | |||
if filepath.IsAbs(path) { | |||
return path | |||
} | |||
return filepath.Join(root, path) | |||
} | |||
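// Illustrative sketch (hypothetical helper, not part of the original file):
// relative paths are joined onto the root; absolute paths are kept as-is.
func exampleRootify() {
fmt.Println(rootify("data", "/home/user/.tendermint"))         // /home/user/.tendermint/data
fmt.Println(rootify("/var/tm/data", "/home/user/.tendermint")) // /var/tm/data
}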
//----------------------------------------------------------------------------- | |||
// Moniker | |||
var defaultMoniker = getDefaultMoniker() | |||
// getDefaultMoniker returns a default moniker, which is the host name. If the
// runtime fails to get the host name, "anonymous" will be returned.
func getDefaultMoniker() string { | |||
moniker, err := os.Hostname() | |||
if err != nil { | |||
moniker = "anonymous" | |||
} | |||
return moniker | |||
} |
@ -1,28 +0,0 @@ | |||
package config | |||
import ( | |||
"testing" | |||
"github.com/stretchr/testify/assert" | |||
) | |||
func TestDefaultConfig(t *testing.T) { | |||
assert := assert.New(t) | |||
// set up some defaults | |||
cfg := DefaultConfig() | |||
assert.NotNil(cfg.P2P) | |||
assert.NotNil(cfg.Mempool) | |||
assert.NotNil(cfg.Consensus) | |||
// check the root dir stuff... | |||
cfg.SetRoot("/foo") | |||
cfg.Genesis = "bar" | |||
cfg.DBPath = "/opt/data" | |||
cfg.Mempool.WalPath = "wal/mem/" | |||
assert.Equal("/foo/bar", cfg.GenesisFile()) | |||
assert.Equal("/opt/data", cfg.DBDir()) | |||
assert.Equal("/foo/wal/mem", cfg.Mempool.WalDir()) | |||
} |
@ -1,324 +0,0 @@ | |||
package config | |||
import ( | |||
"bytes" | |||
"os" | |||
"path/filepath" | |||
"text/template" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
var configTemplate *template.Template | |||
func init() { | |||
var err error | |||
if configTemplate, err = template.New("configFileTemplate").Parse(defaultConfigTemplate); err != nil { | |||
panic(err) | |||
} | |||
} | |||
/****** these are for production settings ***********/ | |||
// EnsureRoot creates the root, config, and data directories if they don't exist, | |||
// and panics if it fails. | |||
func EnsureRoot(rootDir string) { | |||
if err := cmn.EnsureDir(rootDir, 0700); err != nil { | |||
cmn.PanicSanity(err.Error()) | |||
} | |||
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultConfigDir), 0700); err != nil { | |||
cmn.PanicSanity(err.Error()) | |||
} | |||
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultDataDir), 0700); err != nil { | |||
cmn.PanicSanity(err.Error()) | |||
} | |||
configFilePath := filepath.Join(rootDir, defaultConfigFilePath) | |||
// Write default config file if missing. | |||
if !cmn.FileExists(configFilePath) { | |||
writeDefaultConfigFile(configFilePath) | |||
} | |||
} | |||
// XXX: this func should probably be called by cmd/tendermint/commands/init.go | |||
// alongside the writing of the genesis.json and priv_validator.json | |||
func writeDefaultConfigFile(configFilePath string) { | |||
WriteConfigFile(configFilePath, DefaultConfig()) | |||
} | |||
// WriteConfigFile renders config using the template and writes it to configFilePath. | |||
func WriteConfigFile(configFilePath string, config *Config) { | |||
var buffer bytes.Buffer | |||
if err := configTemplate.Execute(&buffer, config); err != nil { | |||
panic(err) | |||
} | |||
cmn.MustWriteFile(configFilePath, buffer.Bytes(), 0644) | |||
} | |||
// Note: any changes to the comments/variables/mapstructure | |||
// must be reflected in the appropriate struct in config/config.go | |||
const defaultConfigTemplate = `# This is a TOML config file. | |||
# For more information, see https://github.com/toml-lang/toml | |||
##### main base config options ##### | |||
# TCP or UNIX socket address of the ABCI application, | |||
# or the name of an ABCI application compiled in with the Tendermint binary | |||
proxy_app = "{{ .BaseConfig.ProxyApp }}" | |||
# A custom human readable name for this node | |||
moniker = "{{ .BaseConfig.Moniker }}" | |||
# If this node is many blocks behind the tip of the chain, FastSync
# allows it to catch up quickly by downloading blocks in parallel
# and verifying their commits
fast_sync = {{ .BaseConfig.FastSync }} | |||
# Database backend: leveldb | memdb | |||
db_backend = "{{ .BaseConfig.DBBackend }}" | |||
# Database directory | |||
db_path = "{{ js .BaseConfig.DBPath }}" | |||
# Output level for logging, including package level options | |||
log_level = "{{ .BaseConfig.LogLevel }}" | |||
##### additional base config options ##### | |||
# Path to the JSON file containing the initial validator set and other meta data | |||
genesis_file = "{{ js .BaseConfig.Genesis }}" | |||
# Path to the JSON file containing the private key to use as a validator in the consensus protocol | |||
priv_validator_file = "{{ js .BaseConfig.PrivValidator }}" | |||
# Path to the JSON file containing the private key to use for node authentication in the p2p protocol | |||
node_key_file = "{{ js .BaseConfig.NodeKey}}" | |||
# Mechanism to connect to the ABCI application: socket | grpc | |||
abci = "{{ .BaseConfig.ABCI }}" | |||
# TCP or UNIX socket address for the profiling server to listen on | |||
prof_laddr = "{{ .BaseConfig.ProfListenAddress }}" | |||
# If true, query the ABCI app on connecting to a new peer | |||
# so the app can decide if we should keep the connection or not | |||
filter_peers = {{ .BaseConfig.FilterPeers }} | |||
##### advanced configuration options ##### | |||
##### rpc server configuration options ##### | |||
[rpc] | |||
# TCP or UNIX socket address for the RPC server to listen on | |||
laddr = "{{ .RPC.ListenAddress }}" | |||
# TCP or UNIX socket address for the gRPC server to listen on | |||
# NOTE: This server only supports /broadcast_tx_commit | |||
grpc_laddr = "{{ .RPC.GRPCListenAddress }}" | |||
# Activate unsafe RPC commands like /dial_seeds and /unsafe_flush_mempool | |||
unsafe = {{ .RPC.Unsafe }} | |||
##### peer to peer configuration options ##### | |||
[p2p] | |||
# Address to listen for incoming connections | |||
laddr = "{{ .P2P.ListenAddress }}" | |||
# Comma separated list of seed nodes to connect to | |||
seeds = "{{ .P2P.Seeds }}" | |||
# Comma separated list of nodes to keep persistent connections to | |||
# Do not add private peers to this list if you don't want them advertised | |||
persistent_peers = "{{ .P2P.PersistentPeers }}" | |||
# Path to address book | |||
addr_book_file = "{{ js .P2P.AddrBook }}" | |||
# Set true for strict address routability rules | |||
addr_book_strict = {{ .P2P.AddrBookStrict }} | |||
# Time to wait before flushing messages out on the connection, in ms | |||
flush_throttle_timeout = {{ .P2P.FlushThrottleTimeout }} | |||
# Maximum number of peers to connect to | |||
max_num_peers = {{ .P2P.MaxNumPeers }} | |||
# Maximum size of a message packet payload, in bytes | |||
max_packet_msg_payload_size = {{ .P2P.MaxPacketMsgPayloadSize }} | |||
# Rate at which packets can be sent, in bytes/second | |||
send_rate = {{ .P2P.SendRate }} | |||
# Rate at which packets can be received, in bytes/second | |||
recv_rate = {{ .P2P.RecvRate }} | |||
# Set true to enable the peer-exchange reactor | |||
pex = {{ .P2P.PexReactor }} | |||
# Seed mode, in which the node constantly crawls the network and looks for
# peers. If another node asks it for addresses, it responds and disconnects.
# | |||
# Does not work if the peer-exchange reactor is disabled. | |||
seed_mode = {{ .P2P.SeedMode }} | |||
# Comma separated list of peer IDs to keep private (will not be gossiped to other peers) | |||
private_peer_ids = "{{ .P2P.PrivatePeerIDs }}" | |||
##### mempool configuration options ##### | |||
[mempool] | |||
recheck = {{ .Mempool.Recheck }} | |||
recheck_empty = {{ .Mempool.RecheckEmpty }} | |||
broadcast = {{ .Mempool.Broadcast }} | |||
wal_dir = "{{ js .Mempool.WalPath }}" | |||
# size of the mempool | |||
size = {{ .Mempool.Size }} | |||
# size of the cache (used to filter transactions we saw earlier) | |||
cache_size = {{ .Mempool.CacheSize }} | |||
##### consensus configuration options ##### | |||
[consensus] | |||
wal_file = "{{ js .Consensus.WalPath }}" | |||
# All timeouts are in milliseconds | |||
timeout_propose = {{ .Consensus.TimeoutPropose }} | |||
timeout_propose_delta = {{ .Consensus.TimeoutProposeDelta }} | |||
timeout_prevote = {{ .Consensus.TimeoutPrevote }} | |||
timeout_prevote_delta = {{ .Consensus.TimeoutPrevoteDelta }} | |||
timeout_precommit = {{ .Consensus.TimeoutPrecommit }} | |||
timeout_precommit_delta = {{ .Consensus.TimeoutPrecommitDelta }} | |||
timeout_commit = {{ .Consensus.TimeoutCommit }} | |||
# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0) | |||
skip_timeout_commit = {{ .Consensus.SkipTimeoutCommit }} | |||
# BlockSize | |||
max_block_size_txs = {{ .Consensus.MaxBlockSizeTxs }} | |||
max_block_size_bytes = {{ .Consensus.MaxBlockSizeBytes }} | |||
# EmptyBlocks mode and possible interval between empty blocks in seconds | |||
create_empty_blocks = {{ .Consensus.CreateEmptyBlocks }} | |||
create_empty_blocks_interval = {{ .Consensus.CreateEmptyBlocksInterval }} | |||
# Reactor sleep duration parameters are in milliseconds | |||
peer_gossip_sleep_duration = {{ .Consensus.PeerGossipSleepDuration }} | |||
peer_query_maj23_sleep_duration = {{ .Consensus.PeerQueryMaj23SleepDuration }} | |||
##### transactions indexer configuration options ##### | |||
[tx_index] | |||
# What indexer to use for transactions | |||
# | |||
# Options: | |||
# 1) "null" (default) | |||
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend). | |||
indexer = "{{ .TxIndex.Indexer }}" | |||
# Comma-separated list of tags to index (by default the only tag is tx hash) | |||
# | |||
# It's recommended to index only a subset of tags due to possible memory | |||
# bloat. This, of course, depends on the indexer's DB and the volume of
# transactions. | |||
index_tags = "{{ .TxIndex.IndexTags }}" | |||
# When set to true, tells the indexer to index all tags. Note this may not
# be desirable (see the comment above). IndexTags takes precedence over
# IndexAllTags (i.e. when both are given, only the tags in IndexTags are indexed).
index_all_tags = {{ .TxIndex.IndexAllTags }} | |||
##### instrumentation configuration options ##### | |||
[instrumentation] | |||
# When true, Prometheus metrics are served under /metrics on | |||
# PrometheusListenAddr. | |||
# Check out the documentation for the list of available metrics. | |||
prometheus = {{ .Instrumentation.Prometheus }} | |||
# Address to listen on for connections from Prometheus collector(s)
prometheus_listen_addr = "{{ .Instrumentation.PrometheusListenAddr }}" | |||
` | |||
/****** these are for test settings ***********/ | |||
func ResetTestRoot(testName string) *Config { | |||
rootDir := os.ExpandEnv("$HOME/.tendermint_test") | |||
rootDir = filepath.Join(rootDir, testName) | |||
// Remove the test root's _bak directory if it exists
if cmn.FileExists(rootDir + "_bak") { | |||
if err := os.RemoveAll(rootDir + "_bak"); err != nil { | |||
cmn.PanicSanity(err.Error()) | |||
} | |||
} | |||
// Move any existing test root aside to a _bak directory
if cmn.FileExists(rootDir) { | |||
if err := os.Rename(rootDir, rootDir+"_bak"); err != nil { | |||
cmn.PanicSanity(err.Error()) | |||
} | |||
} | |||
// Create new dir | |||
if err := cmn.EnsureDir(rootDir, 0700); err != nil { | |||
cmn.PanicSanity(err.Error()) | |||
} | |||
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultConfigDir), 0700); err != nil { | |||
cmn.PanicSanity(err.Error()) | |||
} | |||
if err := cmn.EnsureDir(filepath.Join(rootDir, defaultDataDir), 0700); err != nil { | |||
cmn.PanicSanity(err.Error()) | |||
} | |||
baseConfig := DefaultBaseConfig() | |||
configFilePath := filepath.Join(rootDir, defaultConfigFilePath) | |||
genesisFilePath := filepath.Join(rootDir, baseConfig.Genesis) | |||
privFilePath := filepath.Join(rootDir, baseConfig.PrivValidator) | |||
// Write default config file if missing. | |||
if !cmn.FileExists(configFilePath) { | |||
writeDefaultConfigFile(configFilePath) | |||
} | |||
if !cmn.FileExists(genesisFilePath) { | |||
cmn.MustWriteFile(genesisFilePath, []byte(testGenesis), 0644) | |||
} | |||
// we always overwrite the priv val | |||
cmn.MustWriteFile(privFilePath, []byte(testPrivValidator), 0644) | |||
config := TestConfig().SetRoot(rootDir) | |||
return config | |||
} | |||
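// Illustrative: each test gets its own isolated root, e.g.
//
//	cfg := ResetTestRoot("my_test") // hypothetical test name
//
// yields a fresh $HOME/.tendermint_test/my_test containing config.toml, the
// test genesis, and the test priv_validator defined below.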
var testGenesis = `{ | |||
"genesis_time": "0001-01-01T00:00:00.000Z", | |||
"chain_id": "tendermint_test", | |||
"validators": [ | |||
{ | |||
"pub_key": { | |||
"type": "AC26791624DE60", | |||
"value":"AT/+aaL1eB0477Mud9JMm8Sh8BIvOYlPGC9KkIUmFaE=" | |||
}, | |||
"power": 10, | |||
"name": "" | |||
} | |||
], | |||
"app_hash": "" | |||
}` | |||
var testPrivValidator = `{ | |||
"address": "849CB2C877F87A20925F35D00AE6688342D25B47", | |||
"pub_key": { | |||
"type": "AC26791624DE60", | |||
"value": "AT/+aaL1eB0477Mud9JMm8Sh8BIvOYlPGC9KkIUmFaE=" | |||
}, | |||
"priv_key": { | |||
"type": "954568A3288910", | |||
"value": "EVkqJO/jIXp3rkASXfh9YnyToYXRXhBr6g9cQVxPFnQBP/5povV4HTjvsy530kybxKHwEi85iU8YL0qQhSYVoQ==" | |||
}, | |||
"last_height": 0, | |||
"last_round": 0, | |||
"last_step": 0 | |||
}` |
@ -1,94 +0,0 @@ | |||
package config | |||
import ( | |||
"io/ioutil" | |||
"os" | |||
"path/filepath" | |||
"strings" | |||
"testing" | |||
"github.com/stretchr/testify/assert" | |||
"github.com/stretchr/testify/require" | |||
) | |||
func ensureFiles(t *testing.T, rootDir string, files ...string) { | |||
for _, f := range files { | |||
p := rootify(f, rootDir) // note the argument order: rootify(path, root)
_, err := os.Stat(p) | |||
assert.Nil(t, err, p) | |||
} | |||
} | |||
func TestEnsureRoot(t *testing.T) { | |||
require := require.New(t) | |||
// setup temp dir for test | |||
tmpDir, err := ioutil.TempDir("", "config-test") | |||
require.Nil(err) | |||
defer os.RemoveAll(tmpDir) // nolint: errcheck | |||
// create root dir | |||
EnsureRoot(tmpDir) | |||
// make sure config is set properly | |||
data, err := ioutil.ReadFile(filepath.Join(tmpDir, defaultConfigFilePath)) | |||
require.Nil(err) | |||
if !checkConfig(string(data)) { | |||
t.Fatalf("config file missing some information") | |||
} | |||
ensureFiles(t, tmpDir, "data") | |||
} | |||
func TestEnsureTestRoot(t *testing.T) { | |||
require := require.New(t) | |||
testName := "ensureTestRoot" | |||
// create root dir | |||
cfg := ResetTestRoot(testName) | |||
rootDir := cfg.RootDir | |||
// make sure config is set properly | |||
data, err := ioutil.ReadFile(filepath.Join(rootDir, defaultConfigFilePath)) | |||
require.Nil(err) | |||
if !checkConfig(string(data)) { | |||
t.Fatalf("config file missing some information") | |||
} | |||
// TODO: make sure the cfg returned and testconfig are the same! | |||
baseConfig := DefaultBaseConfig() | |||
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, baseConfig.PrivValidator) | |||
} | |||
func checkConfig(configFile string) bool { | |||
// list of words we expect in the config | |||
var elems = []string{ | |||
"moniker", | |||
"seeds", | |||
"proxy_app", | |||
"fast_sync", | |||
"create_empty_blocks", | |||
"peer", | |||
"timeout", | |||
"broadcast", | |||
"send", | |||
"addr", | |||
"wal", | |||
"propose", | |||
"max", | |||
"genesis", | |||
} | |||
for _, e := range elems { | |||
if !strings.Contains(configFile, e) {
return false
}
}
return true
} |
@ -1 +0,0 @@ | |||
See the [consensus spec](https://github.com/tendermint/tendermint/tree/master/docs/spec/consensus) and the [reactor consensus spec](https://github.com/tendermint/tendermint/tree/master/docs/spec/reactors/consensus) for more information. |
@ -1,267 +0,0 @@ | |||
package consensus | |||
import ( | |||
"context" | |||
"sync" | |||
"testing" | |||
"time" | |||
"github.com/stretchr/testify/require" | |||
"github.com/tendermint/tendermint/p2p" | |||
"github.com/tendermint/tendermint/types" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
func init() { | |||
config = ResetConfig("consensus_byzantine_test") | |||
} | |||
//---------------------------------------------- | |||
// byzantine failures | |||
// 4 validators. 1 is byzantine. The other three are partitioned into A (1 val) and B (2 vals). | |||
// byzantine validator sends conflicting proposals into A and B, | |||
// and prevotes/precommits on both of them. | |||
// B sees a commit, A doesn't. | |||
// Byzantine validator refuses to prevote. | |||
// Heal partition and ensure A sees the commit | |||
func TestByzantine(t *testing.T) { | |||
N := 4 | |||
logger := consensusLogger().With("test", "byzantine") | |||
css := randConsensusNet(N, "consensus_byzantine_test", newMockTickerFunc(false), newCounter) | |||
// give the byzantine validator a normal ticker | |||
ticker := NewTimeoutTicker() | |||
ticker.SetLogger(css[0].Logger) | |||
css[0].SetTimeoutTicker(ticker) | |||
switches := make([]*p2p.Switch, N) | |||
p2pLogger := logger.With("module", "p2p") | |||
for i := 0; i < N; i++ { | |||
switches[i] = p2p.NewSwitch(config.P2P) | |||
switches[i].SetLogger(p2pLogger.With("validator", i)) | |||
} | |||
eventChans := make([]chan interface{}, N) | |||
reactors := make([]p2p.Reactor, N) | |||
for i := 0; i < N; i++ { | |||
// make first val byzantine | |||
if i == 0 { | |||
// NOTE: test validators are now MockPV, which by default doesn't
// do any safety checks.
css[i].privValidator.(*types.MockPV).DisableChecks() | |||
css[i].decideProposal = func(j int) func(int64, int) { | |||
return func(height int64, round int) { | |||
byzantineDecideProposalFunc(t, height, round, css[j], switches[j]) | |||
} | |||
}(i) | |||
css[i].doPrevote = func(height int64, round int) {} | |||
} | |||
eventBus := css[i].eventBus | |||
eventBus.SetLogger(logger.With("module", "events", "validator", i)) | |||
eventChans[i] = make(chan interface{}, 1) | |||
err := eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, eventChans[i]) | |||
require.NoError(t, err) | |||
conR := NewConsensusReactor(css[i], true) // so we don't start the consensus states
conR.SetLogger(logger.With("validator", i)) | |||
conR.SetEventBus(eventBus) | |||
var conRI p2p.Reactor // nolint: gotype, gosimple | |||
conRI = conR | |||
// make first val byzantine | |||
if i == 0 { | |||
conRI = NewByzantineReactor(conR) | |||
} | |||
reactors[i] = conRI | |||
} | |||
defer func() { | |||
for _, r := range reactors { | |||
if rr, ok := r.(*ByzantineReactor); ok { | |||
rr.reactor.Switch.Stop() | |||
} else { | |||
r.(*ConsensusReactor).Switch.Stop() | |||
} | |||
} | |||
}() | |||
p2p.MakeConnectedSwitches(config.P2P, N, func(i int, s *p2p.Switch) *p2p.Switch { | |||
// ignore new switch s, we already made ours | |||
switches[i].AddReactor("CONSENSUS", reactors[i]) | |||
return switches[i] | |||
}, func(sws []*p2p.Switch, i, j int) { | |||
// the network starts partitioned with a globally active adversary
if i != 0 { | |||
return | |||
} | |||
p2p.Connect2Switches(sws, i, j) | |||
}) | |||
// start the non-byz state machines. | |||
// note these must be started before the byz | |||
for i := 1; i < N; i++ { | |||
cr := reactors[i].(*ConsensusReactor) | |||
cr.SwitchToConsensus(cr.conS.GetState(), 0) | |||
} | |||
// start the byzantine state machine | |||
byzR := reactors[0].(*ByzantineReactor) | |||
s := byzR.reactor.conS.GetState() | |||
byzR.reactor.SwitchToConsensus(s, 0) | |||
// byz proposer sends one block to peers[0]
// and the other block to peers[1] and peers[2].
// note that the peers and switches orders don't match.
peers := switches[0].Peers().List() | |||
// partition A | |||
ind0 := getSwitchIndex(switches, peers[0]) | |||
// partition B | |||
ind1 := getSwitchIndex(switches, peers[1]) | |||
ind2 := getSwitchIndex(switches, peers[2]) | |||
p2p.Connect2Switches(switches, ind1, ind2) | |||
// wait for someone in the big partition (B) to make a block | |||
<-eventChans[ind2] | |||
t.Log("A block has been committed. Healing partition") | |||
p2p.Connect2Switches(switches, ind0, ind1) | |||
p2p.Connect2Switches(switches, ind0, ind2) | |||
// wait till everyone makes the first new block | |||
// (one of them already has) | |||
wg := new(sync.WaitGroup) | |||
wg.Add(2) | |||
for i := 1; i < N-1; i++ { | |||
go func(j int) { | |||
<-eventChans[j] | |||
wg.Done() | |||
}(i) | |||
} | |||
done := make(chan struct{}) | |||
go func() { | |||
wg.Wait() | |||
close(done) | |||
}() | |||
tick := time.NewTicker(time.Second * 10)
defer tick.Stop()
select { | |||
case <-done: | |||
case <-tick.C: | |||
for i, reactor := range reactors { | |||
t.Log(cmn.Fmt("Consensus Reactor %v", i)) | |||
t.Log(cmn.Fmt("%v", reactor)) | |||
} | |||
t.Fatalf("Timed out waiting for all validators to commit first block") | |||
} | |||
} | |||
//------------------------------- | |||
// byzantine consensus functions | |||
func byzantineDecideProposalFunc(t *testing.T, height int64, round int, cs *ConsensusState, sw *p2p.Switch) { | |||
// The byzantine validator creates two proposals and tries to split the vote.
// Avoid sending on internalMsgQueue and running consensus state. | |||
// Create a new proposal block from state/txs from the mempool. | |||
block1, blockParts1 := cs.createProposalBlock() | |||
polRound, polBlockID := cs.Votes.POLInfo() | |||
proposal1 := types.NewProposal(height, round, blockParts1.Header(), polRound, polBlockID) | |||
if err := cs.privValidator.SignProposal(cs.state.ChainID, proposal1); err != nil { | |||
t.Error(err) | |||
} | |||
// Create a new proposal block from state/txs from the mempool. | |||
block2, blockParts2 := cs.createProposalBlock() | |||
polRound, polBlockID = cs.Votes.POLInfo() | |||
proposal2 := types.NewProposal(height, round, blockParts2.Header(), polRound, polBlockID) | |||
if err := cs.privValidator.SignProposal(cs.state.ChainID, proposal2); err != nil { | |||
t.Error(err) | |||
} | |||
block1Hash := block1.Hash() | |||
block2Hash := block2.Hash() | |||
// broadcast conflicting proposals/block parts to peers | |||
peers := sw.Peers().List() | |||
t.Logf("Byzantine: broadcasting conflicting proposals to %d peers", len(peers)) | |||
for i, peer := range peers { | |||
if i < len(peers)/2 { | |||
go sendProposalAndParts(height, round, cs, peer, proposal1, block1Hash, blockParts1) | |||
} else { | |||
go sendProposalAndParts(height, round, cs, peer, proposal2, block2Hash, blockParts2) | |||
} | |||
} | |||
} | |||
func sendProposalAndParts(height int64, round int, cs *ConsensusState, peer p2p.Peer, proposal *types.Proposal, blockHash []byte, parts *types.PartSet) { | |||
// proposal | |||
msg := &ProposalMessage{Proposal: proposal} | |||
peer.Send(DataChannel, cdc.MustMarshalBinaryBare(msg)) | |||
// parts | |||
for i := 0; i < parts.Total(); i++ { | |||
part := parts.GetPart(i) | |||
msg := &BlockPartMessage{ | |||
Height: height, // This tells peer that this part applies to us. | |||
Round: round, // This tells peer that this part applies to us. | |||
Part: part, | |||
} | |||
peer.Send(DataChannel, cdc.MustMarshalBinaryBare(msg)) | |||
} | |||
// votes | |||
cs.mtx.Lock() | |||
prevote, _ := cs.signVote(types.VoteTypePrevote, blockHash, parts.Header()) | |||
precommit, _ := cs.signVote(types.VoteTypePrecommit, blockHash, parts.Header()) | |||
cs.mtx.Unlock() | |||
peer.Send(VoteChannel, cdc.MustMarshalBinaryBare(&VoteMessage{prevote})) | |||
peer.Send(VoteChannel, cdc.MustMarshalBinaryBare(&VoteMessage{precommit})) | |||
} | |||
//---------------------------------------- | |||
// byzantine consensus reactor | |||
type ByzantineReactor struct { | |||
cmn.Service | |||
reactor *ConsensusReactor | |||
} | |||
func NewByzantineReactor(conR *ConsensusReactor) *ByzantineReactor { | |||
return &ByzantineReactor{ | |||
Service: conR, | |||
reactor: conR, | |||
} | |||
} | |||
func (br *ByzantineReactor) SetSwitch(s *p2p.Switch) { br.reactor.SetSwitch(s) } | |||
func (br *ByzantineReactor) GetChannels() []*p2p.ChannelDescriptor { return br.reactor.GetChannels() } | |||
func (br *ByzantineReactor) AddPeer(peer p2p.Peer) { | |||
if !br.reactor.IsRunning() { | |||
return | |||
} | |||
// Create peerState for peer | |||
peerState := NewPeerState(peer).SetLogger(br.reactor.Logger) | |||
peer.Set(types.PeerStateKey, peerState) | |||
// Send our state to peer. | |||
// If we're fast_syncing, broadcast a RoundStepMessage later upon SwitchToConsensus(). | |||
if !br.reactor.fastSync { | |||
br.reactor.sendNewRoundStepMessages(peer) | |||
} | |||
} | |||
func (br *ByzantineReactor) RemovePeer(peer p2p.Peer, reason interface{}) { | |||
br.reactor.RemovePeer(peer, reason) | |||
} | |||
func (br *ByzantineReactor) Receive(chID byte, peer p2p.Peer, msgBytes []byte) { | |||
br.reactor.Receive(chID, peer, msgBytes) | |||
} |
@ -1,495 +0,0 @@ | |||
package consensus | |||
import ( | |||
"bytes" | |||
"context" | |||
"fmt" | |||
"io/ioutil" | |||
"os" | |||
"path" | |||
"sort" | |||
"sync" | |||
"testing" | |||
"time" | |||
abcicli "github.com/tendermint/abci/client" | |||
abci "github.com/tendermint/abci/types" | |||
bc "github.com/tendermint/tendermint/blockchain" | |||
cfg "github.com/tendermint/tendermint/config" | |||
cstypes "github.com/tendermint/tendermint/consensus/types" | |||
mempl "github.com/tendermint/tendermint/mempool" | |||
"github.com/tendermint/tendermint/p2p" | |||
"github.com/tendermint/tendermint/privval" | |||
sm "github.com/tendermint/tendermint/state" | |||
"github.com/tendermint/tendermint/types" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
dbm "github.com/tendermint/tmlibs/db" | |||
"github.com/tendermint/tmlibs/log" | |||
"github.com/tendermint/abci/example/counter" | |||
"github.com/tendermint/abci/example/kvstore" | |||
"github.com/go-kit/kit/log/term" | |||
) | |||
const ( | |||
testSubscriber = "test-client" | |||
) | |||
// config is the global test configuration (genesis, chain_id, priv_val)
var config *cfg.Config // NOTE: must be reset for each _test.go file
var ensureTimeout = time.Second * 1 // must be whole seconds because CreateEmptyBlocksInterval is expressed in seconds
func ensureDir(dir string, mode os.FileMode) { | |||
if err := cmn.EnsureDir(dir, mode); err != nil { | |||
panic(err) | |||
} | |||
} | |||
func ResetConfig(name string) *cfg.Config { | |||
return cfg.ResetTestRoot(name) | |||
} | |||
//------------------------------------------------------------------------------- | |||
// validator stub (a kvstore consensus peer we control) | |||
type validatorStub struct { | |||
Index int // Validator index. NOTE: we don't assume validator set changes. | |||
Height int64 | |||
Round int | |||
types.PrivValidator | |||
} | |||
var testMinPower int64 = 10 | |||
func NewValidatorStub(privValidator types.PrivValidator, valIndex int) *validatorStub { | |||
return &validatorStub{ | |||
Index: valIndex, | |||
PrivValidator: privValidator, | |||
} | |||
} | |||
func (vs *validatorStub) signVote(voteType byte, hash []byte, header types.PartSetHeader) (*types.Vote, error) { | |||
vote := &types.Vote{ | |||
ValidatorIndex: vs.Index, | |||
ValidatorAddress: vs.PrivValidator.GetAddress(), | |||
Height: vs.Height, | |||
Round: vs.Round, | |||
Timestamp: time.Now().UTC(), | |||
Type: voteType, | |||
BlockID: types.BlockID{Hash: hash, PartsHeader: header},
} | |||
err := vs.PrivValidator.SignVote(config.ChainID(), vote) | |||
return vote, err | |||
} | |||
// Sign vote for type/hash/header | |||
func signVote(vs *validatorStub, voteType byte, hash []byte, header types.PartSetHeader) *types.Vote { | |||
v, err := vs.signVote(voteType, hash, header) | |||
if err != nil { | |||
panic(fmt.Errorf("failed to sign vote: %v", err)) | |||
} | |||
return v | |||
} | |||
func signVotes(voteType byte, hash []byte, header types.PartSetHeader, vss ...*validatorStub) []*types.Vote { | |||
votes := make([]*types.Vote, len(vss)) | |||
for i, vs := range vss { | |||
votes[i] = signVote(vs, voteType, hash, header) | |||
} | |||
return votes | |||
} | |||
func incrementHeight(vss ...*validatorStub) { | |||
for _, vs := range vss { | |||
vs.Height++ | |||
} | |||
} | |||
func incrementRound(vss ...*validatorStub) { | |||
for _, vs := range vss { | |||
vs.Round++ | |||
} | |||
} | |||
//------------------------------------------------------------------------------- | |||
// Functions for transitioning the consensus state | |||
func startTestRound(cs *ConsensusState, height int64, round int) { | |||
cs.enterNewRound(height, round) | |||
cs.startRoutines(0) | |||
} | |||
// Create proposal block from cs1 but sign it with vs | |||
func decideProposal(cs1 *ConsensusState, vs *validatorStub, height int64, round int) (proposal *types.Proposal, block *types.Block) { | |||
block, blockParts := cs1.createProposalBlock() | |||
if block == nil { // on error | |||
panic("error creating proposal block") | |||
} | |||
// Make proposal | |||
polRound, polBlockID := cs1.Votes.POLInfo() | |||
proposal = types.NewProposal(height, round, blockParts.Header(), polRound, polBlockID) | |||
if err := vs.SignProposal(cs1.state.ChainID, proposal); err != nil { | |||
panic(err) | |||
} | |||
return | |||
} | |||
func addVotes(to *ConsensusState, votes ...*types.Vote) { | |||
for _, vote := range votes { | |||
to.peerMsgQueue <- msgInfo{Msg: &VoteMessage{vote}} | |||
} | |||
} | |||
func signAddVotes(to *ConsensusState, voteType byte, hash []byte, header types.PartSetHeader, vss ...*validatorStub) { | |||
votes := signVotes(voteType, hash, header, vss...) | |||
addVotes(to, votes...) | |||
} | |||
func validatePrevote(t *testing.T, cs *ConsensusState, round int, privVal *validatorStub, blockHash []byte) { | |||
prevotes := cs.Votes.Prevotes(round) | |||
var vote *types.Vote | |||
if vote = prevotes.GetByAddress(privVal.GetAddress()); vote == nil { | |||
panic("Failed to find prevote from validator") | |||
} | |||
if blockHash == nil { | |||
if vote.BlockID.Hash != nil { | |||
panic(fmt.Sprintf("Expected prevote to be for nil, got %X", vote.BlockID.Hash)) | |||
} | |||
} else { | |||
if !bytes.Equal(vote.BlockID.Hash, blockHash) { | |||
panic(fmt.Sprintf("Expected prevote to be for %X, got %X", blockHash, vote.BlockID.Hash)) | |||
} | |||
} | |||
} | |||
func validateLastPrecommit(t *testing.T, cs *ConsensusState, privVal *validatorStub, blockHash []byte) { | |||
votes := cs.LastCommit | |||
var vote *types.Vote | |||
if vote = votes.GetByAddress(privVal.GetAddress()); vote == nil { | |||
panic("Failed to find precommit from validator") | |||
} | |||
if !bytes.Equal(vote.BlockID.Hash, blockHash) { | |||
panic(fmt.Sprintf("Expected precommit to be for %X, got %X", blockHash, vote.BlockID.Hash)) | |||
} | |||
} | |||
func validatePrecommit(t *testing.T, cs *ConsensusState, thisRound, lockRound int, privVal *validatorStub, votedBlockHash, lockedBlockHash []byte) { | |||
precommits := cs.Votes.Precommits(thisRound) | |||
var vote *types.Vote | |||
if vote = precommits.GetByAddress(privVal.GetAddress()); vote == nil { | |||
panic("Failed to find precommit from validator") | |||
} | |||
if votedBlockHash == nil { | |||
if vote.BlockID.Hash != nil { | |||
panic("Expected precommit to be for nil") | |||
} | |||
} else { | |||
if !bytes.Equal(vote.BlockID.Hash, votedBlockHash) { | |||
panic("Expected precommit to be for proposal block") | |||
} | |||
} | |||
if lockedBlockHash == nil { | |||
if cs.LockedRound != lockRound || cs.LockedBlock != nil { | |||
panic(fmt.Sprintf("Expected to be locked on nil at round %d. Got locked at round %d with block %v", lockRound, cs.LockedRound, cs.LockedBlock)) | |||
} | |||
} else { | |||
if cs.LockedRound != lockRound || !bytes.Equal(cs.LockedBlock.Hash(), lockedBlockHash) { | |||
panic(fmt.Sprintf("Expected block to be locked on round %d, got %d. Got locked block %X, expected %X", lockRound, cs.LockedRound, cs.LockedBlock.Hash(), lockedBlockHash)) | |||
} | |||
} | |||
} | |||
func validatePrevoteAndPrecommit(t *testing.T, cs *ConsensusState, thisRound, lockRound int, privVal *validatorStub, votedBlockHash, lockedBlockHash []byte) { | |||
// verify the prevote | |||
validatePrevote(t, cs, thisRound, privVal, votedBlockHash) | |||
// verify precommit | |||
cs.mtx.Lock() | |||
validatePrecommit(t, cs, thisRound, lockRound, privVal, votedBlockHash, lockedBlockHash) | |||
cs.mtx.Unlock() | |||
} | |||
func subscribeToVoter(cs *ConsensusState, addr []byte) chan interface{} { | |||
voteCh0 := make(chan interface{}) | |||
err := cs.eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryVote, voteCh0) | |||
if err != nil { | |||
panic(fmt.Sprintf("failed to subscribe %s to %v", testSubscriber, types.EventQueryVote)) | |||
} | |||
voteCh := make(chan interface{}) | |||
go func() { | |||
for v := range voteCh0 { | |||
vote := v.(types.EventDataVote) | |||
// we only fire for our own votes | |||
if bytes.Equal(addr, vote.Vote.ValidatorAddress) { | |||
voteCh <- v | |||
} | |||
} | |||
}() | |||
return voteCh | |||
} | |||
//------------------------------------------------------------------------------- | |||
// consensus states | |||
func newConsensusState(state sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState { | |||
return newConsensusStateWithConfig(config, state, pv, app) | |||
} | |||
func newConsensusStateWithConfig(thisConfig *cfg.Config, state sm.State, pv types.PrivValidator, app abci.Application) *ConsensusState { | |||
blockDB := dbm.NewMemDB() | |||
return newConsensusStateWithConfigAndBlockStore(thisConfig, state, pv, app, blockDB) | |||
} | |||
func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state sm.State, pv types.PrivValidator, app abci.Application, blockDB dbm.DB) *ConsensusState { | |||
// Get BlockStore | |||
blockStore := bc.NewBlockStore(blockDB) | |||
// one for mempool, one for consensus | |||
mtx := new(sync.Mutex) | |||
proxyAppConnMem := abcicli.NewLocalClient(mtx, app) | |||
proxyAppConnCon := abcicli.NewLocalClient(mtx, app) | |||
// Make Mempool | |||
mempool := mempl.NewMempool(thisConfig.Mempool, proxyAppConnMem, 0) | |||
mempool.SetLogger(log.TestingLogger().With("module", "mempool")) | |||
if thisConfig.Consensus.WaitForTxs() { | |||
mempool.EnableTxsAvailable() | |||
} | |||
// mock the evidence pool | |||
evpool := sm.MockEvidencePool{} | |||
// Make ConsensusState | |||
stateDB := dbm.NewMemDB() | |||
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyAppConnCon, mempool, evpool) | |||
cs := NewConsensusState(thisConfig.Consensus, state, blockExec, blockStore, mempool, evpool) | |||
cs.SetLogger(log.TestingLogger().With("module", "consensus")) | |||
cs.SetPrivValidator(pv) | |||
eventBus := types.NewEventBus() | |||
eventBus.SetLogger(log.TestingLogger().With("module", "events")) | |||
eventBus.Start() | |||
cs.SetEventBus(eventBus) | |||
return cs | |||
} | |||
func loadPrivValidator(config *cfg.Config) *privval.FilePV { | |||
privValidatorFile := config.PrivValidatorFile() | |||
ensureDir(path.Dir(privValidatorFile), 0700) | |||
privValidator := privval.LoadOrGenFilePV(privValidatorFile) | |||
privValidator.Reset() | |||
return privValidator | |||
} | |||
func randConsensusState(nValidators int) (*ConsensusState, []*validatorStub) { | |||
// Get State | |||
state, privVals := randGenesisState(nValidators, false, 10) | |||
vss := make([]*validatorStub, nValidators) | |||
cs := newConsensusState(state, privVals[0], counter.NewCounterApplication(true)) | |||
for i := 0; i < nValidators; i++ { | |||
vss[i] = NewValidatorStub(privVals[i], i) | |||
} | |||
// since cs1 starts at height 1
incrementHeight(vss[1:]...) | |||
return cs, vss | |||
} | |||
//------------------------------------------------------------------------------- | |||
func ensureNoNewStep(stepCh <-chan interface{}) { | |||
timer := time.NewTimer(ensureTimeout) | |||
select { | |||
case <-timer.C:
case <-stepCh:
panic("We should be stuck waiting, not moving to the next step") | |||
} | |||
} | |||
func ensureNewStep(stepCh <-chan interface{}) { | |||
timer := time.NewTimer(ensureTimeout) | |||
select { | |||
case <-timer.C:
panic("We shouldn't be stuck waiting")
case <-stepCh:
} | |||
} | |||
//------------------------------------------------------------------------------- | |||
// consensus nets | |||
// consensusLogger is a TestingLogger which uses a different | |||
// color for each validator ("validator" key must exist). | |||
func consensusLogger() log.Logger { | |||
return log.TestingLoggerWithColorFn(func(keyvals ...interface{}) term.FgBgColor { | |||
for i := 0; i < len(keyvals)-1; i += 2 { | |||
if keyvals[i] == "validator" { | |||
return term.FgBgColor{Fg: term.Color(uint8(keyvals[i+1].(int) + 1))} | |||
} | |||
} | |||
return term.FgBgColor{} | |||
}).With("module", "consensus") | |||
} | |||
func randConsensusNet(nValidators int, testName string, tickerFunc func() TimeoutTicker, appFunc func() abci.Application, configOpts ...func(*cfg.Config)) []*ConsensusState { | |||
genDoc, privVals := randGenesisDoc(nValidators, false, 30) | |||
css := make([]*ConsensusState, nValidators) | |||
logger := consensusLogger() | |||
for i := 0; i < nValidators; i++ { | |||
stateDB := dbm.NewMemDB() // each state needs its own db | |||
state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc) | |||
thisConfig := ResetConfig(cmn.Fmt("%s_%d", testName, i)) | |||
for _, opt := range configOpts { | |||
opt(thisConfig) | |||
} | |||
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal | |||
app := appFunc() | |||
vals := types.TM2PB.Validators(state.Validators) | |||
app.InitChain(abci.RequestInitChain{Validators: vals}) | |||
css[i] = newConsensusStateWithConfig(thisConfig, state, privVals[i], app) | |||
css[i].SetTimeoutTicker(tickerFunc()) | |||
css[i].SetLogger(logger.With("validator", i, "module", "consensus")) | |||
} | |||
return css | |||
} | |||
// nPeers = nValidators + number of non-validator peers
func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerFunc func() TimeoutTicker, appFunc func() abci.Application) []*ConsensusState { | |||
genDoc, privVals := randGenesisDoc(nValidators, false, testMinPower) | |||
css := make([]*ConsensusState, nPeers) | |||
logger := consensusLogger() | |||
for i := 0; i < nPeers; i++ { | |||
stateDB := dbm.NewMemDB() // each state needs its own db | |||
state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc) | |||
thisConfig := ResetConfig(cmn.Fmt("%s_%d", testName, i)) | |||
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal | |||
var privVal types.PrivValidator | |||
if i < nValidators { | |||
privVal = privVals[i] | |||
} else { | |||
_, tempFilePath := cmn.Tempfile("priv_validator_") | |||
privVal = privval.GenFilePV(tempFilePath) | |||
} | |||
app := appFunc() | |||
vals := types.TM2PB.Validators(state.Validators) | |||
app.InitChain(abci.RequestInitChain{Validators: vals}) | |||
css[i] = newConsensusStateWithConfig(thisConfig, state, privVal, app) | |||
css[i].SetTimeoutTicker(tickerFunc()) | |||
css[i].SetLogger(logger.With("validator", i, "module", "consensus")) | |||
} | |||
return css | |||
} | |||
func getSwitchIndex(switches []*p2p.Switch, peer p2p.Peer) int { | |||
for i, s := range switches { | |||
if peer.NodeInfo().ID == s.NodeInfo().ID { | |||
return i | |||
} | |||
} | |||
panic("didnt find peer in switches") | |||
return -1 | |||
} | |||
//------------------------------------------------------------------------------- | |||
// genesis | |||
func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.GenesisDoc, []types.PrivValidator) { | |||
validators := make([]types.GenesisValidator, numValidators) | |||
privValidators := make([]types.PrivValidator, numValidators) | |||
for i := 0; i < numValidators; i++ { | |||
val, privVal := types.RandValidator(randPower, minPower) | |||
validators[i] = types.GenesisValidator{ | |||
PubKey: val.PubKey, | |||
Power: val.VotingPower, | |||
} | |||
privValidators[i] = privVal | |||
} | |||
sort.Sort(types.PrivValidatorsByAddress(privValidators)) | |||
return &types.GenesisDoc{ | |||
GenesisTime: time.Now(), | |||
ChainID: config.ChainID(), | |||
Validators: validators, | |||
}, privValidators | |||
} | |||
func randGenesisState(numValidators int, randPower bool, minPower int64) (sm.State, []types.PrivValidator) { | |||
genDoc, privValidators := randGenesisDoc(numValidators, randPower, minPower) | |||
s0, _ := sm.MakeGenesisState(genDoc) | |||
db := dbm.NewMemDB() | |||
sm.SaveState(db, s0) | |||
return s0, privValidators | |||
} | |||
//------------------------------------ | |||
// mock ticker | |||
func newMockTickerFunc(onlyOnce bool) func() TimeoutTicker { | |||
return func() TimeoutTicker { | |||
return &mockTicker{ | |||
c: make(chan timeoutInfo, 10), | |||
onlyOnce: onlyOnce, | |||
} | |||
} | |||
} | |||
// mock ticker only fires on RoundStepNewHeight | |||
// and only once if onlyOnce=true | |||
type mockTicker struct { | |||
c chan timeoutInfo | |||
mtx sync.Mutex | |||
onlyOnce bool | |||
fired bool | |||
} | |||
func (m *mockTicker) Start() error { | |||
return nil | |||
} | |||
func (m *mockTicker) Stop() error { | |||
return nil | |||
} | |||
func (m *mockTicker) ScheduleTimeout(ti timeoutInfo) { | |||
m.mtx.Lock() | |||
defer m.mtx.Unlock() | |||
if m.onlyOnce && m.fired { | |||
return | |||
} | |||
if ti.Step == cstypes.RoundStepNewHeight { | |||
m.c <- ti | |||
m.fired = true | |||
} | |||
} | |||
func (m *mockTicker) Chan() <-chan timeoutInfo { | |||
return m.c | |||
} | |||
func (mockTicker) SetLogger(log.Logger) { | |||
} | |||
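// Illustrative: wiring the mock ticker into a consensus state so the state
// machine only advances on new heights (and, with onlyOnce=true, only once):
//
//	cs.SetTimeoutTicker(newMockTickerFunc(true)())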
//------------------------------------ | |||
func newCounter() abci.Application { | |||
return counter.NewCounterApplication(true) | |||
} | |||
func newPersistentKVStore() abci.Application { | |||
dir, err := ioutil.TempDir("/tmp", "persistent-kvstore")
if err != nil {
panic(err)
}
return kvstore.NewPersistentKVStoreApplication(dir) | |||
} |
@ -1,232 +0,0 @@ | |||
package consensus | |||
import ( | |||
"encoding/binary" | |||
"fmt" | |||
"testing" | |||
"time" | |||
"github.com/stretchr/testify/assert" | |||
"github.com/tendermint/abci/example/code" | |||
abci "github.com/tendermint/abci/types" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
"github.com/tendermint/tendermint/types" | |||
) | |||
func init() { | |||
config = ResetConfig("consensus_mempool_test") | |||
} | |||
func TestMempoolNoProgressUntilTxsAvailable(t *testing.T) { | |||
config := ResetConfig("consensus_mempool_txs_available_test") | |||
config.Consensus.CreateEmptyBlocks = false | |||
state, privVals := randGenesisState(1, false, 10) | |||
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication()) | |||
cs.mempool.EnableTxsAvailable() | |||
height, round := cs.Height, cs.Round | |||
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock) | |||
startTestRound(cs, height, round) | |||
ensureNewStep(newBlockCh) // first block gets committed | |||
ensureNoNewStep(newBlockCh) | |||
deliverTxsRange(cs, 0, 1) | |||
ensureNewStep(newBlockCh) // commit txs | |||
ensureNewStep(newBlockCh) // commit updated app hash | |||
ensureNoNewStep(newBlockCh) | |||
} | |||
func TestMempoolProgressAfterCreateEmptyBlocksInterval(t *testing.T) { | |||
config := ResetConfig("consensus_mempool_txs_available_test") | |||
config.Consensus.CreateEmptyBlocksInterval = int(ensureTimeout.Seconds()) | |||
state, privVals := randGenesisState(1, false, 10) | |||
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication()) | |||
cs.mempool.EnableTxsAvailable() | |||
height, round := cs.Height, cs.Round | |||
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock) | |||
startTestRound(cs, height, round) | |||
ensureNewStep(newBlockCh) // first block gets committed | |||
ensureNoNewStep(newBlockCh) // then we don't make a block ...
ensureNewStep(newBlockCh) // until the CreateEmptyBlocksInterval has passed | |||
} | |||
func TestMempoolProgressInHigherRound(t *testing.T) { | |||
config := ResetConfig("consensus_mempool_txs_available_test") | |||
config.Consensus.CreateEmptyBlocks = false | |||
state, privVals := randGenesisState(1, false, 10) | |||
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication()) | |||
cs.mempool.EnableTxsAvailable() | |||
height, round := cs.Height, cs.Round | |||
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock) | |||
newRoundCh := subscribe(cs.eventBus, types.EventQueryNewRound) | |||
timeoutCh := subscribe(cs.eventBus, types.EventQueryTimeoutPropose) | |||
cs.setProposal = func(proposal *types.Proposal) error { | |||
if cs.Height == 2 && cs.Round == 0 { | |||
// don't set the proposal in round 0 so we time out and
// go to the next round
cs.Logger.Info("Ignoring set proposal at height 2, round 0") | |||
return nil | |||
} | |||
return cs.defaultSetProposal(proposal) | |||
} | |||
startTestRound(cs, height, round) | |||
ensureNewStep(newRoundCh) // first round at first height | |||
ensureNewStep(newBlockCh) // first block gets committed | |||
ensureNewStep(newRoundCh) // first round at next height | |||
deliverTxsRange(cs, 0, 1) // we deliver txs, but don't set a proposal so we get the next round
<-timeoutCh | |||
ensureNewStep(newRoundCh) // wait for the next round | |||
ensureNewStep(newBlockCh) // now we can commit the block | |||
} | |||
func deliverTxsRange(cs *ConsensusState, start, end int) { | |||
// Deliver some txs. | |||
for i := start; i < end; i++ { | |||
txBytes := make([]byte, 8) | |||
binary.BigEndian.PutUint64(txBytes, uint64(i)) | |||
err := cs.mempool.CheckTx(txBytes, nil) | |||
if err != nil { | |||
panic(cmn.Fmt("Error after CheckTx: %v", err)) | |||
} | |||
} | |||
} | |||
func TestMempoolTxConcurrentWithCommit(t *testing.T) { | |||
state, privVals := randGenesisState(1, false, 10) | |||
cs := newConsensusState(state, privVals[0], NewCounterApplication()) | |||
height, round := cs.Height, cs.Round | |||
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock) | |||
NTxs := 10000 | |||
go deliverTxsRange(cs, 0, NTxs) | |||
startTestRound(cs, height, round) | |||
for nTxs := 0; nTxs < NTxs; { | |||
select {
case b := <-newBlockCh:
evt := b.(types.EventDataNewBlock)
nTxs += int(evt.Block.Header.NumTxs)
case <-time.After(time.Second * 30): // time.After avoids leaking a ticker per iteration
panic("Timed out waiting to commit blocks with transactions") | |||
} | |||
} | |||
} | |||
func TestMempoolRmBadTx(t *testing.T) { | |||
state, privVals := randGenesisState(1, false, 10) | |||
app := NewCounterApplication() | |||
cs := newConsensusState(state, privVals[0], app) | |||
// increment the counter by 1 | |||
txBytes := make([]byte, 8) | |||
binary.BigEndian.PutUint64(txBytes, uint64(0)) | |||
resDeliver := app.DeliverTx(txBytes) | |||
assert.False(t, resDeliver.IsErr(), cmn.Fmt("expected no error. got %v", resDeliver)) | |||
resCommit := app.Commit() | |||
assert.True(t, len(resCommit.Data) > 0) | |||
emptyMempoolCh := make(chan struct{}) | |||
checkTxRespCh := make(chan struct{}) | |||
go func() { | |||
// Try to send the tx through the mempool. | |||
// CheckTx should not err, but the app should return a bad abci code | |||
// and the tx should get removed from the pool | |||
err := cs.mempool.CheckTx(txBytes, func(r *abci.Response) { | |||
if r.GetCheckTx().Code != code.CodeTypeBadNonce {
t.Errorf("expected checktx to return bad nonce, got %v", r) // Errorf: Fatalf must not be called outside the test goroutine
}
checkTxRespCh <- struct{}{} | |||
}) | |||
if err != nil {
t.Errorf("Error after CheckTx: %v", err) // Errorf: Fatalf must not be called outside the test goroutine
return
}
// check for the tx | |||
for { | |||
txs := cs.mempool.Reap(1) | |||
if len(txs) == 0 { | |||
emptyMempoolCh <- struct{}{} | |||
return | |||
} | |||
time.Sleep(10 * time.Millisecond) | |||
} | |||
}() | |||
// Wait until the tx returns | |||
timeout := time.After(time.Second * 5)
select {
case <-checkTxRespCh:
// success
case <-timeout:
t.Fatalf("Timed out waiting for tx to return")
}
// Wait until the tx is removed
timeout = time.After(time.Second * 5)
select {
case <-emptyMempoolCh:
// success
case <-timeout:
t.Fatalf("Timed out waiting for tx to be removed")
}
} | |||
// CounterApplication that maintains a mempool state and resets it upon commit | |||
type CounterApplication struct { | |||
abci.BaseApplication | |||
txCount int | |||
mempoolTxCount int | |||
} | |||
func NewCounterApplication() *CounterApplication { | |||
return &CounterApplication{} | |||
} | |||
func (app *CounterApplication) Info(req abci.RequestInfo) abci.ResponseInfo { | |||
return abci.ResponseInfo{Data: cmn.Fmt("txs:%v", app.txCount)} | |||
} | |||
func (app *CounterApplication) DeliverTx(tx []byte) abci.ResponseDeliverTx { | |||
txValue := txAsUint64(tx) | |||
if txValue != uint64(app.txCount) { | |||
return abci.ResponseDeliverTx{ | |||
Code: code.CodeTypeBadNonce, | |||
Log: fmt.Sprintf("Invalid nonce. Expected %v, got %v", app.txCount, txValue)} | |||
} | |||
app.txCount++ | |||
return abci.ResponseDeliverTx{Code: code.CodeTypeOK} | |||
} | |||
func (app *CounterApplication) CheckTx(tx []byte) abci.ResponseCheckTx { | |||
txValue := txAsUint64(tx) | |||
if txValue != uint64(app.mempoolTxCount) { | |||
return abci.ResponseCheckTx{ | |||
Code: code.CodeTypeBadNonce, | |||
Log: fmt.Sprintf("Invalid nonce. Expected %v, got %v", app.mempoolTxCount, txValue)} | |||
} | |||
app.mempoolTxCount++ | |||
return abci.ResponseCheckTx{Code: code.CodeTypeOK} | |||
} | |||
func txAsUint64(tx []byte) uint64 { | |||
tx8 := make([]byte, 8) | |||
copy(tx8[len(tx8)-len(tx):], tx) | |||
return binary.BigEndian.Uint64(tx8) | |||
} | |||
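// For example (illustrative): txAsUint64([]byte{0x01}) == 1, because the tx is
// right-aligned into an 8-byte buffer before the big-endian decode; the valid
// "next" tx once txCount reaches 256 would be []byte{0, 0, 0, 0, 0, 0, 1, 0}.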
func (app *CounterApplication) Commit() abci.ResponseCommit { | |||
app.mempoolTxCount = app.txCount | |||
if app.txCount == 0 { | |||
return abci.ResponseCommit{} | |||
} | |||
hash := make([]byte, 8) | |||
binary.BigEndian.PutUint64(hash, uint64(app.txCount)) | |||
return abci.ResponseCommit{Data: hash} | |||
} |
@ -1,133 +0,0 @@ | |||
package consensus | |||
import ( | |||
"github.com/go-kit/kit/metrics" | |||
"github.com/go-kit/kit/metrics/discard" | |||
prometheus "github.com/go-kit/kit/metrics/prometheus" | |||
stdprometheus "github.com/prometheus/client_golang/prometheus" | |||
) | |||
// Metrics contains metrics exposed by this package. | |||
type Metrics struct { | |||
// Height of the chain. | |||
Height metrics.Gauge | |||
// Number of rounds. | |||
Rounds metrics.Gauge | |||
// Number of validators. | |||
Validators metrics.Gauge | |||
// Total power of all validators. | |||
ValidatorsPower metrics.Gauge | |||
// Number of validators who did not sign. | |||
MissingValidators metrics.Gauge | |||
// Total power of the missing validators. | |||
MissingValidatorsPower metrics.Gauge | |||
// Number of validators who tried to double sign. | |||
ByzantineValidators metrics.Gauge | |||
// Total power of the byzantine validators. | |||
ByzantineValidatorsPower metrics.Gauge | |||
// Time between this and the last block. | |||
BlockIntervalSeconds metrics.Histogram | |||
// Number of transactions. | |||
NumTxs metrics.Gauge | |||
// Size of the block. | |||
BlockSizeBytes metrics.Gauge | |||
// Total number of transactions. | |||
TotalTxs metrics.Gauge | |||
} | |||
// PrometheusMetrics returns Metrics build using Prometheus client library. | |||
func PrometheusMetrics() *Metrics { | |||
return &Metrics{ | |||
Height: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{ | |||
Subsystem: "consensus", | |||
Name: "height", | |||
Help: "Height of the chain.", | |||
}, []string{}), | |||
Rounds: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{ | |||
Subsystem: "consensus", | |||
Name: "rounds", | |||
Help: "Number of rounds.", | |||
}, []string{}), | |||
Validators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{ | |||
Subsystem: "consensus", | |||
Name: "validators", | |||
Help: "Number of validators.", | |||
}, []string{}), | |||
ValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{ | |||
Subsystem: "consensus", | |||
Name: "validators_power", | |||
Help: "Total power of all validators.", | |||
}, []string{}), | |||
MissingValidators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{ | |||
Subsystem: "consensus", | |||
Name: "missing_validators", | |||
Help: "Number of validators who did not sign.", | |||
}, []string{}), | |||
MissingValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{ | |||
Subsystem: "consensus", | |||
Name: "missing_validators_power", | |||
Help: "Total power of the missing validators.", | |||
}, []string{}), | |||
ByzantineValidators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{ | |||
Subsystem: "consensus", | |||
Name: "byzantine_validators", | |||
Help: "Number of validators who tried to double sign.", | |||
}, []string{}), | |||
ByzantineValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{ | |||
Subsystem: "consensus", | |||
Name: "byzantine_validators_power", | |||
Help: "Total power of the byzantine validators.", | |||
}, []string{}), | |||
BlockIntervalSeconds: prometheus.NewHistogramFrom(stdprometheus.HistogramOpts{ | |||
Subsystem: "consensus", | |||
Name: "block_interval_seconds", | |||
Help: "Time between this and the last block.", | |||
Buckets: []float64{1, 2.5, 5, 10, 60}, | |||
}, []string{}), | |||
NumTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{ | |||
Subsystem: "consensus", | |||
Name: "num_txs", | |||
Help: "Number of transactions.", | |||
}, []string{}), | |||
BlockSizeBytes: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{ | |||
Subsystem: "consensus", | |||
Name: "block_size_bytes", | |||
Help: "Size of the block.", | |||
}, []string{}), | |||
TotalTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{ | |||
Subsystem: "consensus", | |||
Name: "total_txs", | |||
Help: "Total number of transactions.", | |||
}, []string{}), | |||
} | |||
} | |||
// NopMetrics returns no-op Metrics. | |||
func NopMetrics() *Metrics { | |||
return &Metrics{ | |||
Height: discard.NewGauge(), | |||
Rounds: discard.NewGauge(), | |||
Validators: discard.NewGauge(), | |||
ValidatorsPower: discard.NewGauge(), | |||
MissingValidators: discard.NewGauge(), | |||
MissingValidatorsPower: discard.NewGauge(), | |||
ByzantineValidators: discard.NewGauge(), | |||
ByzantineValidatorsPower: discard.NewGauge(), | |||
BlockIntervalSeconds: discard.NewHistogram(), | |||
NumTxs: discard.NewGauge(), | |||
BlockSizeBytes: discard.NewGauge(), | |||
TotalTxs: discard.NewGauge(), | |||
} | |||
} |
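// Illustrative sketch of how these metrics might be updated after a commit;
// the numeric values are placeholders (go-kit gauges use Set, histograms use
// Observe):
//
//	m := PrometheusMetrics() // or NopMetrics() to discard everything
//	m.Height.Set(float64(42))
//	m.NumTxs.Set(float64(7))
//	m.BlockIntervalSeconds.Observe(1.9)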
@ -1,538 +0,0 @@ | |||
package consensus | |||
import ( | |||
"context" | |||
"fmt" | |||
"os" | |||
"runtime" | |||
"runtime/pprof" | |||
"sync" | |||
"testing" | |||
"time" | |||
"github.com/tendermint/abci/example/kvstore" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
"github.com/tendermint/tmlibs/log" | |||
cfg "github.com/tendermint/tendermint/config" | |||
"github.com/tendermint/tendermint/p2p" | |||
p2pdummy "github.com/tendermint/tendermint/p2p/dummy" | |||
"github.com/tendermint/tendermint/types" | |||
"github.com/stretchr/testify/assert" | |||
"github.com/stretchr/testify/require" | |||
) | |||
func init() { | |||
config = ResetConfig("consensus_reactor_test") | |||
} | |||
//---------------------------------------------- | |||
// in-process testnets | |||
func startConsensusNet(t *testing.T, css []*ConsensusState, N int) ([]*ConsensusReactor, []chan interface{}, []*types.EventBus) { | |||
reactors := make([]*ConsensusReactor, N) | |||
eventChans := make([]chan interface{}, N) | |||
eventBuses := make([]*types.EventBus, N) | |||
for i := 0; i < N; i++ { | |||
reactors[i] = NewConsensusReactor(css[i], true) // so we don't start the consensus states
reactors[i].SetLogger(css[i].Logger) | |||
// eventBus is already started with the cs | |||
eventBuses[i] = css[i].eventBus | |||
reactors[i].SetEventBus(eventBuses[i]) | |||
eventChans[i] = make(chan interface{}, 1) | |||
err := eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, eventChans[i]) | |||
require.NoError(t, err) | |||
} | |||
// make connected switches and start all reactors | |||
p2p.MakeConnectedSwitches(config.P2P, N, func(i int, s *p2p.Switch) *p2p.Switch { | |||
s.AddReactor("CONSENSUS", reactors[i]) | |||
s.SetLogger(reactors[i].conS.Logger.With("module", "p2p")) | |||
return s | |||
}, p2p.Connect2Switches) | |||
// now that everyone is connected, start the state machines | |||
// If we started the state machines before everyone was connected, | |||
// we'd block when the cs fires NewBlockEvent and the peers are trying to start their reactors | |||
// TODO: is this still true with new pubsub? | |||
for i := 0; i < N; i++ { | |||
s := reactors[i].conS.GetState() | |||
reactors[i].SwitchToConsensus(s, 0) | |||
} | |||
return reactors, eventChans, eventBuses | |||
} | |||
func stopConsensusNet(logger log.Logger, reactors []*ConsensusReactor, eventBuses []*types.EventBus) { | |||
logger.Info("stopConsensusNet", "n", len(reactors)) | |||
for i, r := range reactors { | |||
logger.Info("stopConsensusNet: Stopping ConsensusReactor", "i", i) | |||
r.Switch.Stop() | |||
} | |||
for i, b := range eventBuses { | |||
logger.Info("stopConsensusNet: Stopping eventBus", "i", i) | |||
b.Stop() | |||
} | |||
logger.Info("stopConsensusNet: DONE", "n", len(reactors)) | |||
} | |||
// Ensure a testnet makes blocks | |||
func TestReactorBasic(t *testing.T) { | |||
N := 4 | |||
css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter) | |||
reactors, eventChans, eventBuses := startConsensusNet(t, css, N) | |||
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses) | |||
// wait till everyone makes the first new block | |||
timeoutWaitGroup(t, N, func(j int) { | |||
<-eventChans[j] | |||
}, css) | |||
} | |||
// Ensure a testnet sends proposal heartbeats and makes blocks when there are txs | |||
func TestReactorProposalHeartbeats(t *testing.T) { | |||
N := 4 | |||
css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter, | |||
func(c *cfg.Config) { | |||
c.Consensus.CreateEmptyBlocks = false | |||
}) | |||
reactors, eventChans, eventBuses := startConsensusNet(t, css, N) | |||
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses) | |||
heartbeatChans := make([]chan interface{}, N) | |||
var err error | |||
for i := 0; i < N; i++ { | |||
heartbeatChans[i] = make(chan interface{}, 1) | |||
err = eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryProposalHeartbeat, heartbeatChans[i]) | |||
require.NoError(t, err) | |||
} | |||
// wait till everyone sends a proposal heartbeat | |||
timeoutWaitGroup(t, N, func(j int) { | |||
<-heartbeatChans[j] | |||
}, css) | |||
// send a tx | |||
if err := css[3].mempool.CheckTx([]byte{1, 2, 3}, nil); err != nil {
t.Fatal(err)
}
// wait till everyone makes the first new block | |||
timeoutWaitGroup(t, N, func(j int) { | |||
<-eventChans[j] | |||
}, css) | |||
} | |||
// Test we record block parts from other peers | |||
func TestReactorRecordsBlockParts(t *testing.T) { | |||
// create dummy peer | |||
peer := p2pdummy.NewPeer() | |||
ps := NewPeerState(peer).SetLogger(log.TestingLogger()) | |||
peer.Set(types.PeerStateKey, ps) | |||
// create reactor | |||
css := randConsensusNet(1, "consensus_reactor_records_block_parts_test", newMockTickerFunc(true), newPersistentKVStore) | |||
reactor := NewConsensusReactor(css[0], false) // fastSync = false, so the consensus state starts with the reactor
reactor.SetEventBus(css[0].eventBus) | |||
reactor.SetLogger(log.TestingLogger()) | |||
sw := p2p.MakeSwitch(cfg.DefaultP2PConfig(), 1, "testing", "123.123.123", func(i int, sw *p2p.Switch) *p2p.Switch { return sw }) | |||
reactor.SetSwitch(sw) | |||
err := reactor.Start() | |||
require.NoError(t, err) | |||
defer reactor.Stop() | |||
// 1) new block part | |||
parts := types.NewPartSetFromData(cmn.RandBytes(100), 10) | |||
msg := &BlockPartMessage{ | |||
Height: 2, | |||
Round: 0, | |||
Part: parts.GetPart(0), | |||
} | |||
bz, err := cdc.MarshalBinaryBare(msg) | |||
require.NoError(t, err) | |||
reactor.Receive(DataChannel, peer, bz) | |||
require.Equal(t, 1, ps.BlockPartsSent(), "number of block parts sent should have increased by 1") | |||
// 2) block part with the same height, but different round | |||
msg.Round = 1 | |||
bz, err = cdc.MarshalBinaryBare(msg) | |||
require.NoError(t, err) | |||
reactor.Receive(DataChannel, peer, bz) | |||
require.Equal(t, 1, ps.BlockPartsSent(), "number of block parts sent should stay the same") | |||
// 3) block part from earlier height | |||
msg.Height = 1 | |||
msg.Round = 0 | |||
bz, err = cdc.MarshalBinaryBare(msg) | |||
require.NoError(t, err) | |||
reactor.Receive(DataChannel, peer, bz) | |||
require.Equal(t, 1, ps.BlockPartsSent(), "number of block parts sent should stay the same") | |||
} | |||
// Test we record votes from other peers | |||
func TestReactorRecordsVotes(t *testing.T) { | |||
// create dummy peer | |||
peer := p2pdummy.NewPeer() | |||
ps := NewPeerState(peer).SetLogger(log.TestingLogger()) | |||
peer.Set(types.PeerStateKey, ps) | |||
// create reactor | |||
css := randConsensusNet(1, "consensus_reactor_records_votes_test", newMockTickerFunc(true), newPersistentKVStore) | |||
reactor := NewConsensusReactor(css[0], false) // fastSync = false, so the consensus state starts with the reactor
reactor.SetEventBus(css[0].eventBus) | |||
reactor.SetLogger(log.TestingLogger()) | |||
sw := p2p.MakeSwitch(cfg.DefaultP2PConfig(), 1, "testing", "123.123.123", func(i int, sw *p2p.Switch) *p2p.Switch { return sw }) | |||
reactor.SetSwitch(sw) | |||
err := reactor.Start() | |||
require.NoError(t, err) | |||
defer reactor.Stop() | |||
_, val := css[0].state.Validators.GetByIndex(0) | |||
// 1) new vote | |||
vote := &types.Vote{ | |||
ValidatorIndex: 0, | |||
ValidatorAddress: val.Address, | |||
Height: 2, | |||
Round: 0, | |||
Timestamp: time.Now().UTC(), | |||
Type: types.VoteTypePrevote, | |||
BlockID: types.BlockID{}, | |||
} | |||
bz, err := cdc.MarshalBinaryBare(&VoteMessage{vote}) | |||
require.NoError(t, err) | |||
reactor.Receive(VoteChannel, peer, bz) | |||
assert.Equal(t, 1, ps.VotesSent(), "number of votes sent should have increased by 1") | |||
// 2) vote with the same height, but different round | |||
vote.Round = 1 | |||
bz, err = cdc.MarshalBinaryBare(&VoteMessage{vote}) | |||
require.NoError(t, err) | |||
reactor.Receive(VoteChannel, peer, bz) | |||
assert.Equal(t, 1, ps.VotesSent(), "number of votes sent should stay the same") | |||
// 3) vote from earlier height | |||
vote.Height = 1 | |||
vote.Round = 0 | |||
bz, err = cdc.MarshalBinaryBare(&VoteMessage{vote}) | |||
require.NoError(t, err) | |||
reactor.Receive(VoteChannel, peer, bz) | |||
assert.Equal(t, 1, ps.VotesSent(), "number of votes sent should stay the same") | |||
} | |||
//------------------------------------------------------------- | |||
// ensure we can make blocks despite cycling a validator set | |||
func TestReactorVotingPowerChange(t *testing.T) { | |||
nVals := 4 | |||
logger := log.TestingLogger() | |||
css := randConsensusNet(nVals, "consensus_voting_power_changes_test", newMockTickerFunc(true), newPersistentKVStore) | |||
reactors, eventChans, eventBuses := startConsensusNet(t, css, nVals) | |||
defer stopConsensusNet(logger, reactors, eventBuses) | |||
// map of active validators | |||
activeVals := make(map[string]struct{}) | |||
for i := 0; i < nVals; i++ { | |||
activeVals[string(css[i].privValidator.GetAddress())] = struct{}{} | |||
} | |||
// wait till everyone makes block 1 | |||
timeoutWaitGroup(t, nVals, func(j int) { | |||
<-eventChans[j] | |||
}, css) | |||
//--------------------------------------------------------------------------- | |||
logger.Debug("---------------------------- Testing changing the voting power of one validator a few times") | |||
val1PubKey := css[0].privValidator.GetPubKey() | |||
val1PubKeyABCI := types.TM2PB.PubKey(val1PubKey) | |||
updateValidatorTx := kvstore.MakeValSetChangeTx(val1PubKeyABCI, 25) | |||
previousTotalVotingPower := css[0].GetRoundState().LastValidators.TotalVotingPower() | |||
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx) | |||
waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx) | |||
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css) | |||
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css) | |||
if css[0].GetRoundState().LastValidators.TotalVotingPower() == previousTotalVotingPower { | |||
t.Fatalf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[0].GetRoundState().LastValidators.TotalVotingPower()) | |||
} | |||
updateValidatorTx = kvstore.MakeValSetChangeTx(val1PubKeyABCI, 2) | |||
previousTotalVotingPower = css[0].GetRoundState().LastValidators.TotalVotingPower() | |||
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx) | |||
waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx) | |||
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css) | |||
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css) | |||
if css[0].GetRoundState().LastValidators.TotalVotingPower() == previousTotalVotingPower { | |||
t.Fatalf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[0].GetRoundState().LastValidators.TotalVotingPower()) | |||
} | |||
updateValidatorTx = kvstore.MakeValSetChangeTx(val1PubKeyABCI, 26) | |||
previousTotalVotingPower = css[0].GetRoundState().LastValidators.TotalVotingPower() | |||
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css, updateValidatorTx) | |||
waitForAndValidateBlockWithTx(t, nVals, activeVals, eventChans, css, updateValidatorTx) | |||
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css) | |||
waitForAndValidateBlock(t, nVals, activeVals, eventChans, css) | |||
if css[0].GetRoundState().LastValidators.TotalVotingPower() == previousTotalVotingPower { | |||
t.Fatalf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[0].GetRoundState().LastValidators.TotalVotingPower()) | |||
} | |||
} | |||
func TestReactorValidatorSetChanges(t *testing.T) { | |||
nPeers := 7 | |||
nVals := 4 | |||
css := randConsensusNetWithPeers(nVals, nPeers, "consensus_val_set_changes_test", newMockTickerFunc(true), newPersistentKVStore) | |||
logger := log.TestingLogger() | |||
reactors, eventChans, eventBuses := startConsensusNet(t, css, nPeers) | |||
defer stopConsensusNet(logger, reactors, eventBuses) | |||
// map of active validators | |||
activeVals := make(map[string]struct{}) | |||
for i := 0; i < nVals; i++ { | |||
activeVals[string(css[i].privValidator.GetAddress())] = struct{}{} | |||
} | |||
// wait till everyone makes block 1 | |||
timeoutWaitGroup(t, nPeers, func(j int) { | |||
<-eventChans[j] | |||
}, css) | |||
//--------------------------------------------------------------------------- | |||
logger.Info("---------------------------- Testing adding one validator") | |||
newValidatorPubKey1 := css[nVals].privValidator.GetPubKey() | |||
valPubKey1ABCI := types.TM2PB.PubKey(newValidatorPubKey1) | |||
newValidatorTx1 := kvstore.MakeValSetChangeTx(valPubKey1ABCI, testMinPower) | |||
// wait till everyone makes block 2 | |||
// ensure the commit includes all validators | |||
// send newValTx to change vals in block 3 | |||
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, newValidatorTx1) | |||
// wait till everyone makes block 3. | |||
// it includes the commit for block 2, which is by the original validator set | |||
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, newValidatorTx1) | |||
// wait till everyone makes block 4. | |||
// it includes the commit for block 3, which is by the original validator set | |||
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css) | |||
// the commits for block 4 should be with the updated validator set | |||
activeVals[string(newValidatorPubKey1.Address())] = struct{}{} | |||
// wait till everyone makes block 5 | |||
// it includes the commit for block 4, which should have the updated validator set | |||
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css) | |||
//--------------------------------------------------------------------------- | |||
logger.Info("---------------------------- Testing changing the voting power of one validator") | |||
updateValidatorPubKey1 := css[nVals].privValidator.GetPubKey() | |||
updatePubKey1ABCI := types.TM2PB.PubKey(updateValidatorPubKey1) | |||
updateValidatorTx1 := kvstore.MakeValSetChangeTx(updatePubKey1ABCI, 25) | |||
previousTotalVotingPower := css[nVals].GetRoundState().LastValidators.TotalVotingPower() | |||
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, updateValidatorTx1) | |||
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, updateValidatorTx1) | |||
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css) | |||
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css) | |||
if css[nVals].GetRoundState().LastValidators.TotalVotingPower() == previousTotalVotingPower { | |||
t.Errorf("expected voting power to change (before: %d, after: %d)", previousTotalVotingPower, css[nVals].GetRoundState().LastValidators.TotalVotingPower()) | |||
} | |||
//--------------------------------------------------------------------------- | |||
logger.Info("---------------------------- Testing adding two validators at once") | |||
newValidatorPubKey2 := css[nVals+1].privValidator.GetPubKey() | |||
newVal2ABCI := types.TM2PB.PubKey(newValidatorPubKey2) | |||
newValidatorTx2 := kvstore.MakeValSetChangeTx(newVal2ABCI, testMinPower) | |||
newValidatorPubKey3 := css[nVals+2].privValidator.GetPubKey() | |||
newVal3ABCI := types.TM2PB.PubKey(newValidatorPubKey3) | |||
newValidatorTx3 := kvstore.MakeValSetChangeTx(newVal3ABCI, testMinPower) | |||
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, newValidatorTx2, newValidatorTx3) | |||
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, newValidatorTx2, newValidatorTx3) | |||
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css) | |||
activeVals[string(newValidatorPubKey2.Address())] = struct{}{} | |||
activeVals[string(newValidatorPubKey3.Address())] = struct{}{} | |||
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css) | |||
//--------------------------------------------------------------------------- | |||
logger.Info("---------------------------- Testing removing two validators at once") | |||
removeValidatorTx2 := kvstore.MakeValSetChangeTx(newVal2ABCI, 0) | |||
removeValidatorTx3 := kvstore.MakeValSetChangeTx(newVal3ABCI, 0) | |||
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css, removeValidatorTx2, removeValidatorTx3) | |||
waitForAndValidateBlockWithTx(t, nPeers, activeVals, eventChans, css, removeValidatorTx2, removeValidatorTx3) | |||
waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css) | |||
delete(activeVals, string(newValidatorPubKey2.Address())) | |||
delete(activeVals, string(newValidatorPubKey3.Address())) | |||
waitForBlockWithUpdatedValsAndValidateIt(t, nPeers, activeVals, eventChans, css) | |||
} | |||
// Check we can make blocks with skip_timeout_commit=false | |||
func TestReactorWithTimeoutCommit(t *testing.T) { | |||
N := 4 | |||
css := randConsensusNet(N, "consensus_reactor_with_timeout_commit_test", newMockTickerFunc(false), newCounter) | |||
// override default SkipTimeoutCommit == true for tests | |||
for i := 0; i < N; i++ { | |||
css[i].config.SkipTimeoutCommit = false | |||
} | |||
reactors, eventChans, eventBuses := startConsensusNet(t, css, N-1) | |||
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses) | |||
// wait till everyone makes the first new block | |||
timeoutWaitGroup(t, N-1, func(j int) { | |||
<-eventChans[j] | |||
}, css) | |||
} | |||
func waitForAndValidateBlock(t *testing.T, n int, activeVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState, txs ...[]byte) { | |||
timeoutWaitGroup(t, n, func(j int) { | |||
css[j].Logger.Debug("waitForAndValidateBlock") | |||
newBlockI, ok := <-eventChans[j] | |||
if !ok { | |||
return | |||
} | |||
newBlock := newBlockI.(types.EventDataNewBlock).Block | |||
css[j].Logger.Debug("waitForAndValidateBlock: Got block", "height", newBlock.Height) | |||
err := validateBlock(newBlock, activeVals) | |||
assert.Nil(t, err) | |||
for _, tx := range txs { | |||
err := css[j].mempool.CheckTx(tx, nil)
assert.Nil(t, err)
} | |||
}, css) | |||
} | |||
func waitForAndValidateBlockWithTx(t *testing.T, n int, activeVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState, txs ...[]byte) { | |||
timeoutWaitGroup(t, n, func(j int) { | |||
ntxs := 0 | |||
BLOCK_TX_LOOP: | |||
for { | |||
css[j].Logger.Debug("waitForAndValidateBlockWithTx", "ntxs", ntxs) | |||
newBlockI, ok := <-eventChans[j] | |||
if !ok { | |||
return | |||
} | |||
newBlock := newBlockI.(types.EventDataNewBlock).Block | |||
css[j].Logger.Debug("waitForAndValidateBlockWithTx: Got block", "height", newBlock.Height) | |||
err := validateBlock(newBlock, activeVals) | |||
assert.Nil(t, err) | |||
// check that txs match the txs we're waiting for. | |||
// note they could be spread over multiple blocks, | |||
// but they should be in order. | |||
for _, tx := range newBlock.Data.Txs { | |||
assert.EqualValues(t, txs[ntxs], tx) | |||
ntxs++ | |||
} | |||
if ntxs == len(txs) { | |||
break BLOCK_TX_LOOP | |||
} | |||
} | |||
}, css) | |||
} | |||
func waitForBlockWithUpdatedValsAndValidateIt(t *testing.T, n int, updatedVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState) { | |||
timeoutWaitGroup(t, n, func(j int) { | |||
var newBlock *types.Block | |||
LOOP: | |||
for { | |||
css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt") | |||
newBlockI, ok := <-eventChans[j] | |||
if !ok { | |||
return | |||
} | |||
newBlock = newBlockI.(types.EventDataNewBlock).Block | |||
if newBlock.LastCommit.Size() == len(updatedVals) { | |||
css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt: Got block", "height", newBlock.Height) | |||
break LOOP | |||
} else { | |||
css[j].Logger.Debug("waitForBlockWithUpdatedValsAndValidateIt: Got block with no new validators. Skipping", "height", newBlock.Height) | |||
} | |||
} | |||
err := validateBlock(newBlock, updatedVals) | |||
assert.Nil(t, err) | |||
}, css) | |||
} | |||
// expects high synchrony! | |||
func validateBlock(block *types.Block, activeVals map[string]struct{}) error { | |||
if block.LastCommit.Size() != len(activeVals) { | |||
return fmt.Errorf("Commit size doesn't match number of active validators. Got %d, expected %d", block.LastCommit.Size(), len(activeVals)) | |||
} | |||
for _, vote := range block.LastCommit.Precommits { | |||
if _, ok := activeVals[string(vote.ValidatorAddress)]; !ok { | |||
return fmt.Errorf("Found vote for unactive validator %X", vote.ValidatorAddress) | |||
} | |||
} | |||
return nil | |||
} | |||
func timeoutWaitGroup(t *testing.T, n int, f func(int), css []*ConsensusState) { | |||
wg := new(sync.WaitGroup) | |||
wg.Add(n) | |||
for i := 0; i < n; i++ { | |||
go func(j int) { | |||
f(j) | |||
wg.Done() | |||
}(i) | |||
} | |||
done := make(chan struct{}) | |||
go func() { | |||
wg.Wait() | |||
close(done) | |||
}() | |||
// we're running many nodes in-process, possibly in a virtual machine,
// and spewing debug messages - making a block could take a while.
timeout := time.Second * 300 | |||
select { | |||
case <-done: | |||
case <-time.After(timeout): | |||
for i, cs := range css { | |||
t.Log("#################") | |||
t.Log("Validator", i) | |||
t.Log(cs.GetRoundState()) | |||
t.Log("") | |||
} | |||
os.Stdout.Write([]byte("pprof.Lookup('goroutine'):\n")) | |||
pprof.Lookup("goroutine").WriteTo(os.Stdout, 1) | |||
capture() | |||
panic("Timed out waiting for all validators to commit a block") | |||
} | |||
} | |||
func capture() { | |||
trace := make([]byte, 10240000) | |||
count := runtime.Stack(trace, true) | |||
fmt.Printf("Stack of %d bytes: %s\n", count, trace) | |||
} |
@ -1,469 +0,0 @@ | |||
package consensus | |||
import ( | |||
"bytes" | |||
"fmt" | |||
"hash/crc32" | |||
"io" | |||
"reflect" | |||
//"strconv" | |||
//"strings" | |||
"time" | |||
abci "github.com/tendermint/abci/types" | |||
//auto "github.com/tendermint/tmlibs/autofile" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
dbm "github.com/tendermint/tmlibs/db" | |||
"github.com/tendermint/tmlibs/log" | |||
"github.com/tendermint/tendermint/proxy" | |||
sm "github.com/tendermint/tendermint/state" | |||
"github.com/tendermint/tendermint/types" | |||
"github.com/tendermint/tendermint/version" | |||
) | |||
var crc32c = crc32.MakeTable(crc32.Castagnoli) | |||
// Functionality to replay blocks and messages on recovery from a crash. | |||
// There are two general failure scenarios: | |||
// | |||
// 1. failure during consensus | |||
// 2. failure while applying the block | |||
// | |||
// The former is handled by the WAL, the latter by the proxyApp Handshake on | |||
// restart, which ultimately hands off the work to the WAL. | |||
//----------------------------------------- | |||
// 1. Recover from failure during consensus | |||
// (by replaying messages from the WAL) | |||
//----------------------------------------- | |||
// Unmarshal and apply a single message to the consensus state as if it were | |||
// received in receiveRoutine. Lines that start with "#" are ignored. | |||
// NOTE: receiveRoutine should not be running. | |||
func (cs *ConsensusState) readReplayMessage(msg *TimedWALMessage, newStepCh chan interface{}) error { | |||
// Skip meta messages which exist for demarcating boundaries. | |||
if _, ok := msg.Msg.(EndHeightMessage); ok { | |||
return nil | |||
} | |||
// for logging | |||
switch m := msg.Msg.(type) { | |||
case types.EventDataRoundState: | |||
cs.Logger.Info("Replay: New Step", "height", m.Height, "round", m.Round, "step", m.Step) | |||
// these are playback checks | |||
ticker := time.After(time.Second * 2) | |||
if newStepCh != nil { | |||
select { | |||
case mi := <-newStepCh: | |||
m2 := mi.(types.EventDataRoundState) | |||
if m.Height != m2.Height || m.Round != m2.Round || m.Step != m2.Step { | |||
return fmt.Errorf("RoundState mismatch. Got %v; Expected %v", m2, m) | |||
} | |||
case <-ticker: | |||
return fmt.Errorf("Failed to read off newStepCh") | |||
} | |||
} | |||
case msgInfo: | |||
peerID := m.PeerID | |||
if peerID == "" { | |||
peerID = "local" | |||
} | |||
switch msg := m.Msg.(type) { | |||
case *ProposalMessage: | |||
p := msg.Proposal | |||
cs.Logger.Info("Replay: Proposal", "height", p.Height, "round", p.Round, "header", | |||
p.BlockPartsHeader, "pol", p.POLRound, "peer", peerID) | |||
case *BlockPartMessage: | |||
cs.Logger.Info("Replay: BlockPart", "height", msg.Height, "round", msg.Round, "peer", peerID) | |||
case *VoteMessage: | |||
v := msg.Vote | |||
cs.Logger.Info("Replay: Vote", "height", v.Height, "round", v.Round, "type", v.Type, | |||
"blockID", v.BlockID, "peer", peerID) | |||
} | |||
cs.handleMsg(m) | |||
case timeoutInfo: | |||
cs.Logger.Info("Replay: Timeout", "height", m.Height, "round", m.Round, "step", m.Step, "dur", m.Duration) | |||
cs.handleTimeout(m, cs.RoundState) | |||
default: | |||
return fmt.Errorf("Replay: Unknown TimedWALMessage type: %v", reflect.TypeOf(msg.Msg)) | |||
} | |||
return nil | |||
} | |||
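// Note on newStepCh: interactive playback (ReplayFile) subscribes to the
// event bus and passes the NewRoundStep channel here so each replayed step
// can be checked against the regenerated event; crash recovery
// (catchupReplay below) passes nil, which skips those playback checks.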
// Replay only those messages since the last block. `timeoutRoutine` should | |||
// run concurrently to read off tickChan. | |||
func (cs *ConsensusState) catchupReplay(csHeight int64) error { | |||
// Set replayMode to true so we don't log signing errors. | |||
cs.replayMode = true | |||
defer func() { cs.replayMode = false }() | |||
// Ensure that #ENDHEIGHT for this height doesn't exist. | |||
// NOTE: This is just a sanity check. As far as we know things work fine | |||
// without it, and Handshake could reuse ConsensusState if it weren't for | |||
// this check (since we can crash after writing #ENDHEIGHT). | |||
// | |||
// Ignore data corruption errors since this is a sanity check. | |||
gr, found, err := cs.wal.SearchForEndHeight(csHeight, &WALSearchOptions{IgnoreDataCorruptionErrors: true}) | |||
if err != nil { | |||
return err | |||
} | |||
if gr != nil { | |||
if err := gr.Close(); err != nil { | |||
return err | |||
} | |||
} | |||
if found { | |||
return fmt.Errorf("WAL should not contain #ENDHEIGHT %d", csHeight) | |||
} | |||
// Search for last height marker. | |||
// | |||
// Ignore data corruption errors in previous heights because we only care about last height | |||
gr, found, err = cs.wal.SearchForEndHeight(csHeight-1, &WALSearchOptions{IgnoreDataCorruptionErrors: true}) | |||
if err == io.EOF { | |||
cs.Logger.Error("Replay: wal.group.Search returned EOF", "#ENDHEIGHT", csHeight-1) | |||
} else if err != nil { | |||
return err | |||
} | |||
if !found { | |||
return fmt.Errorf("Cannot replay height %d. WAL does not contain #ENDHEIGHT for %d", csHeight, csHeight-1) | |||
} | |||
defer gr.Close() // nolint: errcheck | |||
cs.Logger.Info("Catchup by replaying consensus messages", "height", csHeight) | |||
var msg *TimedWALMessage | |||
dec := WALDecoder{gr} | |||
for { | |||
msg, err = dec.Decode() | |||
if err == io.EOF { | |||
break | |||
} else if IsDataCorruptionError(err) { | |||
cs.Logger.Debug("data has been corrupted in last height of consensus WAL", "err", err, "height", csHeight) | |||
panic(fmt.Sprintf("data has been corrupted (%v) in last height %d of consensus WAL", err, csHeight)) | |||
} else if err != nil { | |||
return err | |||
} | |||
// NOTE: since the priv key is set when the msgs are received | |||
// it will attempt to eg double sign but we can just ignore it | |||
// since the votes will be replayed and we'll get to the next step | |||
if err := cs.readReplayMessage(msg, nil); err != nil { | |||
return err | |||
} | |||
} | |||
cs.Logger.Info("Replay: Done") | |||
return nil | |||
} | |||
//-------------------------------------------------------------------------------- | |||
// Parses marker lines of the form: | |||
// #ENDHEIGHT: 12345 | |||
/* | |||
func makeHeightSearchFunc(height int64) auto.SearchFunc { | |||
return func(line string) (int, error) { | |||
line = strings.TrimRight(line, "\n") | |||
parts := strings.Split(line, " ") | |||
if len(parts) != 2 { | |||
return -1, errors.New("Line did not have 2 parts") | |||
} | |||
i, err := strconv.Atoi(parts[1]) | |||
if err != nil { | |||
return -1, errors.New("Failed to parse INFO: " + err.Error()) | |||
} | |||
if height < i { | |||
return 1, nil | |||
} else if height == i { | |||
return 0, nil | |||
} else { | |||
return -1, nil | |||
} | |||
} | |||
}*/ | |||
//--------------------------------------------------- | |||
// 2. Recover from failure while applying the block. | |||
// (by handshaking with the app to figure out where | |||
// we were last, and using the WAL to recover there.) | |||
//--------------------------------------------------- | |||
type Handshaker struct { | |||
stateDB dbm.DB | |||
initialState sm.State | |||
store sm.BlockStore | |||
genDoc *types.GenesisDoc | |||
logger log.Logger | |||
nBlocks int // number of blocks applied to the state | |||
} | |||
func NewHandshaker(stateDB dbm.DB, state sm.State, | |||
store sm.BlockStore, genDoc *types.GenesisDoc) *Handshaker { | |||
return &Handshaker{ | |||
stateDB: stateDB, | |||
initialState: state, | |||
store: store, | |||
genDoc: genDoc, | |||
logger: log.NewNopLogger(), | |||
nBlocks: 0, | |||
} | |||
} | |||
func (h *Handshaker) SetLogger(l log.Logger) { | |||
h.logger = l | |||
} | |||
func (h *Handshaker) NBlocks() int { | |||
return h.nBlocks | |||
} | |||
// TODO: retry the handshake/replay if it fails ? | |||
func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error { | |||
// Handshake is done via ABCI Info on the query conn. | |||
res, err := proxyApp.Query().InfoSync(abci.RequestInfo{version.Version}) | |||
if err != nil { | |||
return fmt.Errorf("Error calling Info: %v", err) | |||
} | |||
blockHeight := int64(res.LastBlockHeight) | |||
if blockHeight < 0 { | |||
return fmt.Errorf("Got a negative last block height (%d) from the app", blockHeight) | |||
} | |||
appHash := res.LastBlockAppHash | |||
h.logger.Info("ABCI Handshake", "appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash)) | |||
// TODO: check app version. | |||
// Replay blocks up to the latest in the blockstore. | |||
_, err = h.ReplayBlocks(h.initialState, appHash, blockHeight, proxyApp) | |||
if err != nil { | |||
return fmt.Errorf("Error on replay: %v", err) | |||
} | |||
h.logger.Info("Completed ABCI Handshake - Tendermint and App are synced", | |||
"appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash)) | |||
// TODO: (on restart) replay mempool | |||
return nil | |||
} | |||
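// A minimal wiring sketch (mirroring newConsensusStateForReplay and the
// handshake tests below): the Handshaker is handed to proxy.NewAppConns,
// which runs Handshake on start-up, before any consensus machinery runs.
//
//	h := NewHandshaker(stateDB, state, blockStore, genDoc)
//	h.SetLogger(logger)
//	proxyApp := proxy.NewAppConns(clientCreator, h)
//	if err := proxyApp.Start(); err != nil {
//		// the handshake (and hence replay) failed
//	}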
// Replay all blocks since appBlockHeight and ensure the result matches the current state. | |||
// Returns the final AppHash or an error. | |||
func (h *Handshaker) ReplayBlocks(state sm.State, appHash []byte, appBlockHeight int64, proxyApp proxy.AppConns) ([]byte, error) { | |||
storeBlockHeight := h.store.Height() | |||
stateBlockHeight := state.LastBlockHeight | |||
h.logger.Info("ABCI Replay Blocks", "appHeight", appBlockHeight, "storeHeight", storeBlockHeight, "stateHeight", stateBlockHeight) | |||
// If appBlockHeight == 0 it means that we are at genesis and hence should send InitChain | |||
if appBlockHeight == 0 { | |||
validators := types.TM2PB.Validators(state.Validators) | |||
csParams := types.TM2PB.ConsensusParams(h.genDoc.ConsensusParams) | |||
req := abci.RequestInitChain{ | |||
Time: h.genDoc.GenesisTime.Unix(), // TODO | |||
ChainId: h.genDoc.ChainID, | |||
ConsensusParams: csParams, | |||
Validators: validators, | |||
AppStateBytes: h.genDoc.AppStateJSON, | |||
} | |||
res, err := proxyApp.Consensus().InitChainSync(req) | |||
if err != nil { | |||
return nil, err | |||
} | |||
// If the app returned validators or consensus params,
// update the state with them.
if len(res.Validators) > 0 { | |||
vals, err := types.PB2TM.Validators(res.Validators) | |||
if err != nil { | |||
return nil, err | |||
} | |||
state.Validators = types.NewValidatorSet(vals) | |||
} | |||
if res.ConsensusParams != nil { | |||
state.ConsensusParams = types.PB2TM.ConsensusParams(res.ConsensusParams) | |||
} | |||
sm.SaveState(h.stateDB, state) | |||
} | |||
// First handle edge cases and constraints on the storeBlockHeight | |||
if storeBlockHeight == 0 { | |||
return appHash, checkAppHash(state, appHash) | |||
} else if storeBlockHeight < appBlockHeight { | |||
// the app should never be ahead of the store (but this is under app's control) | |||
return appHash, sm.ErrAppBlockHeightTooHigh{storeBlockHeight, appBlockHeight} | |||
} else if storeBlockHeight < stateBlockHeight { | |||
// the state should never be ahead of the store (this is under tendermint's control) | |||
cmn.PanicSanity(cmn.Fmt("StateBlockHeight (%d) > StoreBlockHeight (%d)", stateBlockHeight, storeBlockHeight)) | |||
} else if storeBlockHeight > stateBlockHeight+1 { | |||
// store should be at most one ahead of the state (this is under tendermint's control) | |||
cmn.PanicSanity(cmn.Fmt("StoreBlockHeight (%d) > StateBlockHeight + 1 (%d)", storeBlockHeight, stateBlockHeight+1)) | |||
} | |||
var err error | |||
// Now either store is equal to state, or one ahead. | |||
// For each, consider all cases of where the app could be, given app <= store | |||
if storeBlockHeight == stateBlockHeight { | |||
// Tendermint ran Commit and saved the state. | |||
// Either the app is asking for replay, or we're all synced up. | |||
if appBlockHeight < storeBlockHeight { | |||
// the app is behind, so replay blocks, but no need to go through WAL (state is already synced to store) | |||
return h.replayBlocks(state, proxyApp, appBlockHeight, storeBlockHeight, false) | |||
} else if appBlockHeight == storeBlockHeight { | |||
// We're good! | |||
return appHash, checkAppHash(state, appHash) | |||
} | |||
} else if storeBlockHeight == stateBlockHeight+1 { | |||
// We saved the block in the store but haven't updated the state, | |||
// so we'll need to replay a block using the WAL. | |||
if appBlockHeight < stateBlockHeight { | |||
// the app is further behind than it should be, so replay blocks | |||
// but leave the last block to go through the WAL | |||
return h.replayBlocks(state, proxyApp, appBlockHeight, storeBlockHeight, true) | |||
} else if appBlockHeight == stateBlockHeight { | |||
// We haven't run Commit (both the state and app are one block behind), | |||
// so replayBlock with the real app. | |||
// NOTE: We could instead use the cs.WAL on cs.Start, | |||
// but we'd have to allow the WAL to replay a block that wrote its #ENDHEIGHT
h.logger.Info("Replay last block using real app") | |||
state, err = h.replayBlock(state, storeBlockHeight, proxyApp.Consensus()) | |||
return state.AppHash, err | |||
} else if appBlockHeight == storeBlockHeight { | |||
// We ran Commit, but didn't save the state, so replayBlock with mock app | |||
abciResponses, err := sm.LoadABCIResponses(h.stateDB, storeBlockHeight) | |||
if err != nil { | |||
return nil, err | |||
} | |||
mockApp := newMockProxyApp(appHash, abciResponses) | |||
h.logger.Info("Replay last block using mock app") | |||
state, err = h.replayBlock(state, storeBlockHeight, mockApp) | |||
return state.AppHash, err | |||
} | |||
} | |||
cmn.PanicSanity("Should never happen") | |||
return nil, nil | |||
} | |||
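// To summarize the case analysis above (a sketch derived from the checks
// in ReplayBlocks; the invariants are app <= store and
// state <= store <= state+1):
//
//	store == 0                     -> nothing saved yet; verify the app hash
//	store == state == app          -> nothing to replay; verify the app hash
//	store == state, app < store    -> replay (app, store] via ExecCommitBlock,
//	                                  without mutating the tendermint state
//	store == state+1, app < state  -> replay up to store-1 as above, then
//	                                  apply the final block with the real app
//	store == state+1, app == state -> apply the last block with the real app
//	store == state+1, app == store -> the app already ran Commit, so apply the
//	                                  last block against a mock app built from
//	                                  the saved ABCIResponses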
func (h *Handshaker) replayBlocks(state sm.State, proxyApp proxy.AppConns, appBlockHeight, storeBlockHeight int64, mutateState bool) ([]byte, error) { | |||
// App is further behind than it should be, so we need to replay blocks. | |||
// We replay all blocks from appBlockHeight+1. | |||
// | |||
// Note that we don't have an old version of the state, | |||
// so we bypass state validation/mutation using sm.ExecCommitBlock.
// This also means we won't be saving validator sets if they change during this period. | |||
// TODO: Load the historical information to fix this and just use state.ApplyBlock | |||
// | |||
// If mutateState == true, the final block is replayed with h.replayBlock() | |||
var appHash []byte | |||
var err error | |||
finalBlock := storeBlockHeight | |||
if mutateState { | |||
finalBlock-- | |||
} | |||
for i := appBlockHeight + 1; i <= finalBlock; i++ { | |||
h.logger.Info("Applying block", "height", i) | |||
block := h.store.LoadBlock(i) | |||
appHash, err = sm.ExecCommitBlock(proxyApp.Consensus(), block, h.logger, state.LastValidators, h.stateDB) | |||
if err != nil { | |||
return nil, err | |||
} | |||
h.nBlocks++ | |||
} | |||
if mutateState { | |||
// sync the final block | |||
state, err = h.replayBlock(state, storeBlockHeight, proxyApp.Consensus()) | |||
if err != nil { | |||
return nil, err | |||
} | |||
appHash = state.AppHash | |||
} | |||
return appHash, checkAppHash(state, appHash) | |||
} | |||
// ApplyBlock on the proxyApp with the last block. | |||
func (h *Handshaker) replayBlock(state sm.State, height int64, proxyApp proxy.AppConnConsensus) (sm.State, error) { | |||
block := h.store.LoadBlock(height) | |||
meta := h.store.LoadBlockMeta(height) | |||
blockExec := sm.NewBlockExecutor(h.stateDB, h.logger, proxyApp, sm.MockMempool{}, sm.MockEvidencePool{}) | |||
var err error | |||
state, err = blockExec.ApplyBlock(state, meta.BlockID, block) | |||
if err != nil { | |||
return sm.State{}, err | |||
} | |||
h.nBlocks++ | |||
return state, nil | |||
} | |||
func checkAppHash(state sm.State, appHash []byte) error { | |||
if !bytes.Equal(state.AppHash, appHash) { | |||
panic(fmt.Errorf("Tendermint state.AppHash does not match AppHash after replay. Got %X, expected %X", appHash, state.AppHash).Error()) | |||
} | |||
return nil | |||
} | |||
//-------------------------------------------------------------------------------- | |||
// mockProxyApp uses ABCIResponses to give the right results | |||
// Useful because we don't want to call Commit() twice for the same block on the real app. | |||
func newMockProxyApp(appHash []byte, abciResponses *sm.ABCIResponses) proxy.AppConnConsensus { | |||
clientCreator := proxy.NewLocalClientCreator(&mockProxyApp{ | |||
appHash: appHash, | |||
abciResponses: abciResponses, | |||
}) | |||
cli, _ := clientCreator.NewABCIClient() | |||
err := cli.Start() | |||
if err != nil { | |||
panic(err) | |||
} | |||
return proxy.NewAppConnConsensus(cli) | |||
} | |||
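// A usage sketch, mirroring the app == store case in ReplayBlocks:
//
//	abciResponses, err := sm.LoadABCIResponses(h.stateDB, height)
//	if err != nil { ... }
//	mockApp := newMockProxyApp(appHash, abciResponses)
//	state, err = h.replayBlock(state, height, mockApp)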
type mockProxyApp struct { | |||
abci.BaseApplication | |||
appHash []byte | |||
txCount int | |||
abciResponses *sm.ABCIResponses | |||
} | |||
func (mock *mockProxyApp) DeliverTx(tx []byte) abci.ResponseDeliverTx { | |||
r := mock.abciResponses.DeliverTx[mock.txCount] | |||
mock.txCount++ | |||
return *r | |||
} | |||
func (mock *mockProxyApp) EndBlock(req abci.RequestEndBlock) abci.ResponseEndBlock { | |||
mock.txCount = 0 | |||
return *mock.abciResponses.EndBlock | |||
} | |||
func (mock *mockProxyApp) Commit() abci.ResponseCommit { | |||
return abci.ResponseCommit{Data: mock.appHash} | |||
} |
@ -1,321 +0,0 @@ | |||
package consensus | |||
import ( | |||
"bufio" | |||
"context" | |||
"fmt" | |||
"io" | |||
"os" | |||
"strconv" | |||
"strings" | |||
"github.com/pkg/errors" | |||
bc "github.com/tendermint/tendermint/blockchain" | |||
cfg "github.com/tendermint/tendermint/config" | |||
"github.com/tendermint/tendermint/proxy" | |||
sm "github.com/tendermint/tendermint/state" | |||
"github.com/tendermint/tendermint/types" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
dbm "github.com/tendermint/tmlibs/db" | |||
"github.com/tendermint/tmlibs/log" | |||
) | |||
const ( | |||
// event bus subscriber | |||
subscriber = "replay-file" | |||
) | |||
//-------------------------------------------------------- | |||
// replay messages interactively or all at once | |||
// replay the wal file | |||
func RunReplayFile(config cfg.BaseConfig, csConfig *cfg.ConsensusConfig, console bool) { | |||
consensusState := newConsensusStateForReplay(config, csConfig) | |||
if err := consensusState.ReplayFile(csConfig.WalFile(), console); err != nil { | |||
cmn.Exit(cmn.Fmt("Error during consensus replay: %v", err)) | |||
} | |||
} | |||
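// Invocation sketch (hypothetical wiring; in practice the `tendermint
// replay` and `tendermint replay_console` commands call this with
// console = false and console = true respectively):
//
//	RunReplayFile(config.BaseConfig, config.Consensus, console)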
// Replay msgs in file or start the console | |||
func (cs *ConsensusState) ReplayFile(file string, console bool) error { | |||
if cs.IsRunning() { | |||
return errors.New("cs is already running, cannot replay") | |||
} | |||
if cs.wal != nil { | |||
return errors.New("cs wal is open, cannot replay") | |||
} | |||
cs.startForReplay() | |||
// ensure all new step events are regenerated as expected | |||
newStepCh := make(chan interface{}, 1) | |||
ctx := context.Background() | |||
err := cs.eventBus.Subscribe(ctx, subscriber, types.EventQueryNewRoundStep, newStepCh) | |||
if err != nil { | |||
return errors.Errorf("failed to subscribe %s to %v", subscriber, types.EventQueryNewRoundStep) | |||
} | |||
defer cs.eventBus.Unsubscribe(ctx, subscriber, types.EventQueryNewRoundStep) | |||
// just open the file for reading, no need to use wal | |||
fp, err := os.OpenFile(file, os.O_RDONLY, 0600) | |||
if err != nil { | |||
return err | |||
} | |||
pb := newPlayback(file, fp, cs, cs.state.Copy()) | |||
defer pb.fp.Close() // nolint: errcheck | |||
var nextN int // apply N msgs in a row | |||
var msg *TimedWALMessage | |||
for { | |||
if nextN == 0 && console { | |||
nextN = pb.replayConsoleLoop() | |||
} | |||
msg, err = pb.dec.Decode() | |||
if err == io.EOF { | |||
return nil | |||
} else if err != nil { | |||
return err | |||
} | |||
if err := pb.cs.readReplayMessage(msg, newStepCh); err != nil { | |||
return err | |||
} | |||
if nextN > 0 { | |||
nextN-- | |||
} | |||
pb.count++ | |||
} | |||
} | |||
//------------------------------------------------ | |||
// playback manager | |||
type playback struct { | |||
cs *ConsensusState | |||
fp *os.File | |||
dec *WALDecoder | |||
count int // how many lines/msgs into the file are we | |||
// replays can be reset to beginning | |||
fileName string // so we can close/reopen the file | |||
genesisState sm.State // so the replay session knows where to restart from | |||
} | |||
func newPlayback(fileName string, fp *os.File, cs *ConsensusState, genState sm.State) *playback { | |||
return &playback{ | |||
cs: cs, | |||
fp: fp, | |||
fileName: fileName, | |||
genesisState: genState, | |||
dec: NewWALDecoder(fp), | |||
} | |||
} | |||
// go back count steps by resetting the state and running (pb.count - count) steps | |||
func (pb *playback) replayReset(count int, newStepCh chan interface{}) error { | |||
pb.cs.Stop() | |||
pb.cs.Wait() | |||
newCS := NewConsensusState(pb.cs.config, pb.genesisState.Copy(), pb.cs.blockExec, | |||
pb.cs.blockStore, pb.cs.mempool, pb.cs.evpool) | |||
newCS.SetEventBus(pb.cs.eventBus) | |||
newCS.startForReplay() | |||
if err := pb.fp.Close(); err != nil { | |||
return err | |||
} | |||
fp, err := os.OpenFile(pb.fileName, os.O_RDONLY, 0600) | |||
if err != nil { | |||
return err | |||
} | |||
pb.fp = fp | |||
pb.dec = NewWALDecoder(fp) | |||
count = pb.count - count | |||
fmt.Printf("Reseting from %d to %d\n", pb.count, count) | |||
pb.count = 0 | |||
pb.cs = newCS | |||
var msg *TimedWALMessage | |||
for i := 0; i < count; i++ { | |||
msg, err = pb.dec.Decode() | |||
if err == io.EOF { | |||
return nil | |||
} else if err != nil { | |||
return err | |||
} | |||
if err := pb.cs.readReplayMessage(msg, newStepCh); err != nil { | |||
return err | |||
} | |||
pb.count++ | |||
} | |||
return nil | |||
} | |||
func (cs *ConsensusState) startForReplay() { | |||
cs.Logger.Error("Replay commands are disabled until someone updates them and writes tests") | |||
/* TODO:! | |||
// since we replay tocks we just ignore ticks | |||
go func() { | |||
for { | |||
select { | |||
case <-cs.tickChan: | |||
case <-cs.Quit: | |||
return | |||
} | |||
} | |||
}()*/ | |||
} | |||
// console function for parsing input and running commands | |||
func (pb *playback) replayConsoleLoop() int { | |||
for { | |||
fmt.Printf("> ") | |||
bufReader := bufio.NewReader(os.Stdin) | |||
line, more, err := bufReader.ReadLine() | |||
if more { | |||
cmn.Exit("input is too long") | |||
} else if err != nil { | |||
cmn.Exit(err.Error()) | |||
} | |||
tokens := strings.Split(string(line), " ") | |||
if len(tokens) == 0 { | |||
continue | |||
} | |||
switch tokens[0] { | |||
case "next": | |||
// "next" -> replay next message | |||
// "next N" -> replay next N messages | |||
if len(tokens) == 1 { | |||
return 0 | |||
} | |||
i, err := strconv.Atoi(tokens[1]) | |||
if err != nil { | |||
fmt.Println("next takes an integer argument") | |||
} else { | |||
return i | |||
} | |||
case "back": | |||
// "back" -> go back one message | |||
// "back N" -> go back N messages | |||
// NOTE: "back" is not supported in the state machine design, | |||
// so we restart and replay up to | |||
ctx := context.Background() | |||
// ensure all new step events are regenerated as expected | |||
newStepCh := make(chan interface{}, 1) | |||
err := pb.cs.eventBus.Subscribe(ctx, subscriber, types.EventQueryNewRoundStep, newStepCh) | |||
if err != nil { | |||
cmn.Exit(fmt.Sprintf("failed to subscribe %s to %v", subscriber, types.EventQueryNewRoundStep)) | |||
} | |||
defer pb.cs.eventBus.Unsubscribe(ctx, subscriber, types.EventQueryNewRoundStep) | |||
if len(tokens) == 1 { | |||
if err := pb.replayReset(1, newStepCh); err != nil { | |||
pb.cs.Logger.Error("Replay reset error", "err", err) | |||
} | |||
} else { | |||
i, err := strconv.Atoi(tokens[1]) | |||
if err != nil { | |||
fmt.Println("back takes an integer argument") | |||
} else if i > pb.count { | |||
fmt.Printf("argument to back must not be larger than the current count (%d)\n", pb.count) | |||
} else { | |||
if err := pb.replayReset(i, newStepCh); err != nil { | |||
pb.cs.Logger.Error("Replay reset error", "err", err) | |||
} | |||
} | |||
} | |||
case "rs": | |||
// "rs" -> print entire round state | |||
// "rs short" -> print height/round/step | |||
// "rs <field>" -> print another field of the round state | |||
rs := pb.cs.RoundState | |||
if len(tokens) == 1 { | |||
fmt.Println(rs) | |||
} else { | |||
switch tokens[1] { | |||
case "short": | |||
fmt.Printf("%v/%v/%v\n", rs.Height, rs.Round, rs.Step) | |||
case "validators": | |||
fmt.Println(rs.Validators) | |||
case "proposal": | |||
fmt.Println(rs.Proposal) | |||
case "proposal_block": | |||
fmt.Printf("%v %v\n", rs.ProposalBlockParts.StringShort(), rs.ProposalBlock.StringShort()) | |||
case "locked_round": | |||
fmt.Println(rs.LockedRound) | |||
case "locked_block": | |||
fmt.Printf("%v %v\n", rs.LockedBlockParts.StringShort(), rs.LockedBlock.StringShort()) | |||
case "votes": | |||
fmt.Println(rs.Votes.StringIndented(" ")) | |||
default: | |||
fmt.Println("Unknown option", tokens[1]) | |||
} | |||
} | |||
case "n": | |||
fmt.Println(pb.count) | |||
} | |||
} | |||
} | |||
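// An illustrative console session (hypothetical input, based on the
// commands handled above):
//
//	> next 5      replay the next five WAL messages
//	> rs short    print the current height/round/step
//	> back 2      reset and replay up to two messages before the current one
//	> n           print how many messages have been replayed so far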
//-------------------------------------------------------------------------------- | |||
// convenience for replay mode | |||
func newConsensusStateForReplay(config cfg.BaseConfig, csConfig *cfg.ConsensusConfig) *ConsensusState { | |||
dbType := dbm.DBBackendType(config.DBBackend) | |||
// Get BlockStore | |||
blockStoreDB := dbm.NewDB("blockstore", dbType, config.DBDir()) | |||
blockStore := bc.NewBlockStore(blockStoreDB) | |||
// Get State | |||
stateDB := dbm.NewDB("state", dbType, config.DBDir()) | |||
gdoc, err := sm.MakeGenesisDocFromFile(config.GenesisFile()) | |||
if err != nil { | |||
cmn.Exit(err.Error()) | |||
} | |||
state, err := sm.MakeGenesisState(gdoc) | |||
if err != nil { | |||
cmn.Exit(err.Error()) | |||
} | |||
// Create proxyAppConn connection (consensus, mempool, query) | |||
clientCreator := proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir()) | |||
proxyApp := proxy.NewAppConns(clientCreator, | |||
NewHandshaker(stateDB, state, blockStore, gdoc)) | |||
err = proxyApp.Start() | |||
if err != nil { | |||
cmn.Exit(cmn.Fmt("Error starting proxy app conns: %v", err)) | |||
} | |||
eventBus := types.NewEventBus() | |||
if err := eventBus.Start(); err != nil { | |||
cmn.Exit(cmn.Fmt("Failed to start event bus: %v", err)) | |||
} | |||
mempool, evpool := sm.MockMempool{}, sm.MockEvidencePool{} | |||
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool) | |||
consensusState := NewConsensusState(csConfig, state.Copy(), blockExec, | |||
blockStore, mempool, evpool) | |||
consensusState.SetEventBus(eventBus) | |||
return consensusState | |||
} |
@ -1,687 +0,0 @@ | |||
package consensus | |||
import ( | |||
"bytes" | |||
"context" | |||
"errors" | |||
"fmt" | |||
"io" | |||
"io/ioutil" | |||
"os" | |||
"path" | |||
"runtime" | |||
"testing" | |||
"time" | |||
"github.com/stretchr/testify/assert" | |||
"github.com/stretchr/testify/require" | |||
"github.com/tendermint/abci/example/kvstore" | |||
abci "github.com/tendermint/abci/types" | |||
crypto "github.com/tendermint/go-crypto" | |||
auto "github.com/tendermint/tmlibs/autofile" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
dbm "github.com/tendermint/tmlibs/db" | |||
cfg "github.com/tendermint/tendermint/config" | |||
"github.com/tendermint/tendermint/privval" | |||
"github.com/tendermint/tendermint/proxy" | |||
sm "github.com/tendermint/tendermint/state" | |||
"github.com/tendermint/tendermint/types" | |||
"github.com/tendermint/tmlibs/log" | |||
) | |||
var consensusReplayConfig *cfg.Config | |||
func init() { | |||
consensusReplayConfig = ResetConfig("consensus_replay_test") | |||
} | |||
// These tests ensure we can always recover from failure at any part of the consensus process. | |||
// There are two general failure scenarios: failure during consensus, and failure while applying the block. | |||
// Only the latter interacts with the app and store, | |||
// but the former has to deal with restrictions on re-use of priv_validator keys. | |||
// The `WAL Tests` are for failures during the consensus; | |||
// the `Handshake Tests` are for failures in applying the block. | |||
// With the help of the WAL, we can recover from it all! | |||
//------------------------------------------------------------------------------------------ | |||
// WAL Tests | |||
// TODO: It would be better to verify explicitly which states we can recover from without the wal | |||
// and which ones we need the wal for - then we'd also be able to only flush the | |||
// wal writer when we need to, instead of with every message. | |||
func startNewConsensusStateAndWaitForBlock(t *testing.T, lastBlockHeight int64, blockDB dbm.DB, stateDB dbm.DB) { | |||
logger := log.TestingLogger() | |||
state, _ := sm.LoadStateFromDBOrGenesisFile(stateDB, consensusReplayConfig.GenesisFile()) | |||
privValidator := loadPrivValidator(consensusReplayConfig) | |||
cs := newConsensusStateWithConfigAndBlockStore(consensusReplayConfig, state, privValidator, kvstore.NewKVStoreApplication(), blockDB) | |||
cs.SetLogger(logger) | |||
bytes, _ := ioutil.ReadFile(cs.config.WalFile()) | |||
// fmt.Printf("====== WAL: \n\r%s\n", bytes) | |||
t.Logf("====== WAL: \n\r%X\n", bytes) | |||
err := cs.Start() | |||
require.NoError(t, err) | |||
defer cs.Stop() | |||
// This is just a signal that we haven't halted; it's not something contained
// in the WAL itself. Assuming the consensus state is running, replay of any | |||
// WAL, including the empty one, should eventually be followed by a new | |||
// block, or else something is wrong. | |||
newBlockCh := make(chan interface{}, 1) | |||
err = cs.eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, newBlockCh) | |||
require.NoError(t, err) | |||
select { | |||
case <-newBlockCh: | |||
case <-time.After(60 * time.Second): | |||
t.Fatalf("Timed out waiting for new block (see trace above)") | |||
} | |||
} | |||
func sendTxs(cs *ConsensusState, ctx context.Context) { | |||
for i := 0; i < 256; i++ { | |||
select { | |||
case <-ctx.Done(): | |||
return | |||
default: | |||
tx := []byte{byte(i)}
cs.mempool.CheckTx(tx, nil)
} | |||
} | |||
} | |||
// TestWALCrash uses crashing WAL to test we can recover from any WAL failure. | |||
func TestWALCrash(t *testing.T) { | |||
testCases := []struct { | |||
name string | |||
initFn func(dbm.DB, *ConsensusState, context.Context) | |||
heightToStop int64 | |||
}{ | |||
{"empty block", | |||
func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) {}, | |||
1}, | |||
{"block with a smaller part size", | |||
func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) { | |||
// XXX: is there a better way to change BlockPartSizeBytes? | |||
cs.state.ConsensusParams.BlockPartSizeBytes = 512 | |||
sm.SaveState(stateDB, cs.state) | |||
go sendTxs(cs, ctx) | |||
}, | |||
1}, | |||
{"many non-empty blocks", | |||
func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) { | |||
go sendTxs(cs, ctx) | |||
}, | |||
3}, | |||
} | |||
for _, tc := range testCases { | |||
t.Run(tc.name, func(t *testing.T) { | |||
crashWALandCheckLiveness(t, tc.initFn, tc.heightToStop) | |||
}) | |||
} | |||
} | |||
func crashWALandCheckLiveness(t *testing.T, initFn func(dbm.DB, *ConsensusState, context.Context), heightToStop int64) { | |||
walPanicked := make(chan error)
crashingWal := &crashingWAL{panicCh: walPanicked, heightToStop: heightToStop}
i := 1 | |||
LOOP: | |||
for { | |||
// fmt.Printf("====== LOOP %d\n", i) | |||
t.Logf("====== LOOP %d\n", i) | |||
// create consensus state from a clean slate | |||
logger := log.NewNopLogger() | |||
stateDB := dbm.NewMemDB() | |||
state, _ := sm.MakeGenesisStateFromFile(consensusReplayConfig.GenesisFile()) | |||
privValidator := loadPrivValidator(consensusReplayConfig) | |||
blockDB := dbm.NewMemDB() | |||
cs := newConsensusStateWithConfigAndBlockStore(consensusReplayConfig, state, privValidator, kvstore.NewKVStoreApplication(), blockDB) | |||
cs.SetLogger(logger) | |||
// start sending transactions | |||
ctx, cancel := context.WithCancel(context.Background()) | |||
initFn(stateDB, cs, ctx) | |||
// clean up WAL file from the previous iteration | |||
walFile := cs.config.WalFile() | |||
os.Remove(walFile) | |||
// set crashing WAL | |||
csWal, err := cs.OpenWAL(walFile) | |||
require.NoError(t, err) | |||
crashingWal.next = csWal | |||
// reset the message counter | |||
crashingWal.msgIndex = 1 | |||
cs.wal = crashingWal | |||
// start consensus state | |||
err = cs.Start() | |||
require.NoError(t, err) | |||
i++ | |||
select { | |||
case err := <-walPanicked:
t.Logf("WAL panicked: %v", err)
// make sure we can make blocks after a crash | |||
startNewConsensusStateAndWaitForBlock(t, cs.Height, blockDB, stateDB) | |||
// stop consensus state and transactions sender (initFn) | |||
cs.Stop() | |||
cancel() | |||
// if we reached the required height, exit | |||
if _, ok := err.(ReachedHeightToStopError); ok { | |||
break LOOP | |||
} | |||
case <-time.After(10 * time.Second): | |||
t.Fatal("WAL did not panic for 10 seconds (check the log)") | |||
} | |||
} | |||
} | |||
// crashingWAL is a WAL which simulates a crash during Write (rather than
// actually crashing). It remembers the message for which it last panicked
// (lastPanickedForMsgIndex), so it doesn't panic for it again in
// subsequent iterations.
type crashingWAL struct { | |||
next WAL | |||
panicCh chan error | |||
heightToStop int64 | |||
msgIndex int // current message index | |||
lastPanickedForMsgIndex int // last message for which we panicked
} | |||
// WALWriteError indicates a WAL crash. | |||
type WALWriteError struct { | |||
msg string | |||
} | |||
func (e WALWriteError) Error() string { | |||
return e.msg | |||
} | |||
// ReachedHeightToStopError indicates we've reached the required consensus | |||
// height and may exit. | |||
type ReachedHeightToStopError struct { | |||
height int64 | |||
} | |||
func (e ReachedHeightToStopError) Error() string { | |||
return fmt.Sprintf("reached height to stop %d", e.height) | |||
} | |||
// Write simulates a WAL crash by sending an error on panicCh and then
// exiting the calling goroutine (the consensus receiveRoutine).
func (w *crashingWAL) Write(m WALMessage) { | |||
if endMsg, ok := m.(EndHeightMessage); ok { | |||
if endMsg.Height == w.heightToStop { | |||
w.panicCh <- ReachedHeightToStopError{endMsg.Height} | |||
runtime.Goexit() | |||
} else { | |||
w.next.Write(m) | |||
} | |||
return | |||
} | |||
if w.msgIndex > w.lastPanickedForMsgIndex {
w.lastPanickedForMsgIndex = w.msgIndex
_, file, line, _ := runtime.Caller(1) | |||
w.panicCh <- WALWriteError{fmt.Sprintf("failed to write %T to WAL (fileline: %s:%d)", m, file, line)} | |||
runtime.Goexit() | |||
} else { | |||
w.msgIndex++ | |||
w.next.Write(m) | |||
} | |||
} | |||
func (w *crashingWAL) WriteSync(m WALMessage) { | |||
w.Write(m) | |||
} | |||
func (w *crashingWAL) Group() *auto.Group { return w.next.Group() } | |||
func (w *crashingWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) { | |||
return w.next.SearchForEndHeight(height, options) | |||
} | |||
func (w *crashingWAL) Start() error { return w.next.Start() } | |||
func (w *crashingWAL) Stop() error { return w.next.Stop() } | |||
func (w *crashingWAL) Wait() { w.next.Wait() } | |||
//------------------------------------------------------------------------------------------ | |||
// Handshake Tests | |||
const ( | |||
NUM_BLOCKS = 6 | |||
) | |||
var ( | |||
mempool = sm.MockMempool{} | |||
evpool = sm.MockEvidencePool{} | |||
) | |||
//--------------------------------------- | |||
// Test handshake/replay | |||
// 0 - all synced up | |||
// 1 - saved block but app and state are behind | |||
// 2 - save block and committed but state is behind | |||
var modes = []uint{0, 1, 2} | |||
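// Concretely (see buildAppStateFromChain below): mode 0 applies every block
// to the app, so app == state == store; mode 1 leaves the last block
// unapplied by both the app and the state, so app == state == store-1;
// mode 2 additionally runs the last block through the app (as if Commit ran
// but the tendermint state was not saved), so app == store == state+1.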
// Sync from scratch | |||
func TestHandshakeReplayAll(t *testing.T) { | |||
for _, m := range modes { | |||
testHandshakeReplay(t, 0, m) | |||
} | |||
} | |||
// Sync many, not from scratch | |||
func TestHandshakeReplaySome(t *testing.T) { | |||
for _, m := range modes { | |||
testHandshakeReplay(t, 1, m) | |||
} | |||
} | |||
// Sync from lagging by one | |||
func TestHandshakeReplayOne(t *testing.T) { | |||
for _, m := range modes { | |||
testHandshakeReplay(t, NUM_BLOCKS-1, m) | |||
} | |||
} | |||
// Sync from caught up | |||
func TestHandshakeReplayNone(t *testing.T) { | |||
for _, m := range modes { | |||
testHandshakeReplay(t, NUM_BLOCKS, m) | |||
} | |||
} | |||
func tempWALWithData(data []byte) string { | |||
walFile, err := ioutil.TempFile("", "wal") | |||
if err != nil { | |||
panic(fmt.Errorf("failed to create temp WAL file: %v", err)) | |||
} | |||
_, err = walFile.Write(data) | |||
if err != nil { | |||
panic(fmt.Errorf("failed to write to temp WAL file: %v", err)) | |||
} | |||
if err := walFile.Close(); err != nil { | |||
panic(fmt.Errorf("failed to close temp WAL file: %v", err)) | |||
} | |||
return walFile.Name() | |||
} | |||
// Make some blocks. Start a fresh app and apply nBlocks blocks. Then restart the app and sync it up with the remaining blocks.
func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) { | |||
config := ResetConfig("proxy_test_") | |||
walBody, err := WALWithNBlocks(NUM_BLOCKS) | |||
if err != nil { | |||
t.Fatal(err) | |||
} | |||
walFile := tempWALWithData(walBody) | |||
config.Consensus.SetWalFile(walFile) | |||
privVal := privval.LoadFilePV(config.PrivValidatorFile()) | |||
wal, err := NewWAL(walFile) | |||
if err != nil { | |||
t.Fatal(err) | |||
} | |||
wal.SetLogger(log.TestingLogger()) | |||
if err := wal.Start(); err != nil { | |||
t.Fatal(err) | |||
} | |||
defer wal.Stop() | |||
chain, commits, err := makeBlockchainFromWAL(wal) | |||
if err != nil { | |||
t.Fatal(err)
} | |||
stateDB, state, store := stateAndStore(config, privVal.GetPubKey()) | |||
store.chain = chain | |||
store.commits = commits | |||
// run the chain through state.ApplyBlock to build up the tendermint state | |||
state = buildTMStateFromChain(config, stateDB, state, chain, mode) | |||
latestAppHash := state.AppHash | |||
// make a new client creator | |||
kvstoreApp := kvstore.NewPersistentKVStoreApplication(path.Join(config.DBDir(), "2")) | |||
clientCreator2 := proxy.NewLocalClientCreator(kvstoreApp) | |||
if nBlocks > 0 { | |||
// run nBlocks against a new client to build up the app state. | |||
// use a throwaway tendermint state | |||
proxyApp := proxy.NewAppConns(clientCreator2, nil) | |||
stateDB, state, _ := stateAndStore(config, privVal.GetPubKey()) | |||
buildAppStateFromChain(proxyApp, stateDB, state, chain, nBlocks, mode) | |||
} | |||
// now start the app using the handshake - it should sync | |||
genDoc, _ := sm.MakeGenesisDocFromFile(config.GenesisFile()) | |||
handshaker := NewHandshaker(stateDB, state, store, genDoc) | |||
proxyApp := proxy.NewAppConns(clientCreator2, handshaker) | |||
if err := proxyApp.Start(); err != nil { | |||
t.Fatalf("Error starting proxy app connections: %v", err) | |||
} | |||
defer proxyApp.Stop() | |||
// get the latest app hash from the app | |||
	res, err := proxyApp.Query().InfoSync(abci.RequestInfo{Version: ""})
if err != nil { | |||
t.Fatal(err) | |||
} | |||
// the app hash should be synced up | |||
if !bytes.Equal(latestAppHash, res.LastBlockAppHash) { | |||
t.Fatalf("Expected app hashes to match after handshake/replay. got %X, expected %X", res.LastBlockAppHash, latestAppHash) | |||
} | |||
expectedBlocksToSync := NUM_BLOCKS - nBlocks | |||
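	// modes 1 and 2 leave the tendermint state (and, in mode 1, the app)
	// one block behind, so the handshake replays one extra block in the
	// cases below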
if nBlocks == NUM_BLOCKS && mode > 0 { | |||
expectedBlocksToSync++ | |||
} else if nBlocks > 0 && mode == 1 { | |||
expectedBlocksToSync++ | |||
} | |||
if handshaker.NBlocks() != expectedBlocksToSync { | |||
t.Fatalf("Expected handshake to sync %d blocks, got %d", expectedBlocksToSync, handshaker.NBlocks()) | |||
} | |||
} | |||
func applyBlock(stateDB dbm.DB, st sm.State, blk *types.Block, proxyApp proxy.AppConns) sm.State { | |||
testPartSize := st.ConsensusParams.BlockPartSizeBytes | |||
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool) | |||
	blkID := types.BlockID{Hash: blk.Hash(), PartsHeader: blk.MakePartSet(testPartSize).Header()}
newState, err := blockExec.ApplyBlock(st, blkID, blk) | |||
if err != nil { | |||
panic(err) | |||
} | |||
return newState | |||
} | |||
func buildAppStateFromChain(proxyApp proxy.AppConns, stateDB dbm.DB, | |||
state sm.State, chain []*types.Block, nBlocks int, mode uint) { | |||
// start a new app without handshake, play nBlocks blocks | |||
if err := proxyApp.Start(); err != nil { | |||
panic(err) | |||
} | |||
defer proxyApp.Stop() | |||
validators := types.TM2PB.Validators(state.Validators) | |||
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{ | |||
Validators: validators, | |||
}); err != nil { | |||
panic(err) | |||
} | |||
switch mode { | |||
case 0: | |||
for i := 0; i < nBlocks; i++ { | |||
block := chain[i] | |||
state = applyBlock(stateDB, state, block, proxyApp) | |||
} | |||
case 1, 2: | |||
for i := 0; i < nBlocks-1; i++ { | |||
block := chain[i] | |||
state = applyBlock(stateDB, state, block, proxyApp) | |||
} | |||
if mode == 2 { | |||
			// update the kvstore height and apphash
			// as if we ran Commit, without updating the tendermint state
state = applyBlock(stateDB, state, chain[nBlocks-1], proxyApp) | |||
} | |||
} | |||
} | |||
func buildTMStateFromChain(config *cfg.Config, stateDB dbm.DB, state sm.State, chain []*types.Block, mode uint) sm.State { | |||
// run the whole chain against this client to build up the tendermint state | |||
clientCreator := proxy.NewLocalClientCreator(kvstore.NewPersistentKVStoreApplication(path.Join(config.DBDir(), "1"))) | |||
proxyApp := proxy.NewAppConns(clientCreator, nil) // sm.NewHandshaker(config, state, store, ReplayLastBlock)) | |||
if err := proxyApp.Start(); err != nil { | |||
panic(err) | |||
} | |||
defer proxyApp.Stop() | |||
validators := types.TM2PB.Validators(state.Validators) | |||
if _, err := proxyApp.Consensus().InitChainSync(abci.RequestInitChain{ | |||
Validators: validators, | |||
}); err != nil { | |||
panic(err) | |||
} | |||
switch mode { | |||
case 0: | |||
// sync right up | |||
for _, block := range chain { | |||
state = applyBlock(stateDB, state, block, proxyApp) | |||
} | |||
case 1, 2: | |||
		// sync up to the penultimate block as if we had stored it,
		// then apply the final block only to a copy of the state (below)
for _, block := range chain[:len(chain)-1] { | |||
state = applyBlock(stateDB, state, block, proxyApp) | |||
} | |||
		// apply the final block to a state copy so we can
		// get the right next appHash but keep the state itself one block back
applyBlock(stateDB, state, chain[len(chain)-1], proxyApp) | |||
} | |||
return state | |||
} | |||
//-------------------------- | |||
// utils for making blocks | |||
func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) { | |||
// Search for height marker | |||
gr, found, err := wal.SearchForEndHeight(0, &WALSearchOptions{}) | |||
if err != nil { | |||
return nil, nil, err | |||
} | |||
if !found { | |||
return nil, nil, errors.New(cmn.Fmt("WAL does not contain height %d.", 1)) | |||
} | |||
defer gr.Close() // nolint: errcheck | |||
// log.Notice("Build a blockchain by reading from the WAL") | |||
var blocks []*types.Block | |||
var commits []*types.Commit | |||
var thisBlockParts *types.PartSet | |||
var thisBlockCommit *types.Commit | |||
var height int64 | |||
dec := NewWALDecoder(gr) | |||
for { | |||
msg, err := dec.Decode() | |||
if err == io.EOF { | |||
break | |||
} else if err != nil { | |||
return nil, nil, err | |||
} | |||
piece := readPieceFromWAL(msg) | |||
if piece == nil { | |||
continue | |||
} | |||
switch p := piece.(type) { | |||
case EndHeightMessage: | |||
			// if it's not the first one, we have a full block
if thisBlockParts != nil { | |||
var block = new(types.Block) | |||
_, err = cdc.UnmarshalBinaryReader(thisBlockParts.GetReader(), block, 0) | |||
if err != nil { | |||
panic(err) | |||
} | |||
if block.Height != height+1 { | |||
panic(cmn.Fmt("read bad block from wal. got height %d, expected %d", block.Height, height+1)) | |||
} | |||
commitHeight := thisBlockCommit.Precommits[0].Height | |||
if commitHeight != height+1 { | |||
panic(cmn.Fmt("commit doesnt match. got height %d, expected %d", commitHeight, height+1)) | |||
} | |||
blocks = append(blocks, block) | |||
commits = append(commits, thisBlockCommit) | |||
height++ | |||
} | |||
case *types.PartSetHeader: | |||
thisBlockParts = types.NewPartSetFromHeader(*p) | |||
case *types.Part: | |||
_, err := thisBlockParts.AddPart(p) | |||
if err != nil { | |||
return nil, nil, err | |||
} | |||
case *types.Vote: | |||
if p.Type == types.VoteTypePrecommit { | |||
thisBlockCommit = &types.Commit{ | |||
BlockID: p.BlockID, | |||
Precommits: []*types.Vote{p}, | |||
} | |||
} | |||
} | |||
} | |||
// grab the last block too | |||
var block = new(types.Block) | |||
_, err = cdc.UnmarshalBinaryReader(thisBlockParts.GetReader(), block, 0) | |||
if err != nil { | |||
panic(err) | |||
} | |||
if block.Height != height+1 { | |||
panic(cmn.Fmt("read bad block from wal. got height %d, expected %d", block.Height, height+1)) | |||
} | |||
commitHeight := thisBlockCommit.Precommits[0].Height | |||
if commitHeight != height+1 { | |||
panic(cmn.Fmt("commit doesnt match. got height %d, expected %d", commitHeight, height+1)) | |||
} | |||
blocks = append(blocks, block) | |||
commits = append(commits, thisBlockCommit) | |||
return blocks, commits, nil | |||
} | |||
func readPieceFromWAL(msg *TimedWALMessage) interface{} { | |||
	// extract the piece of the message relevant for rebuilding the chain
switch m := msg.Msg.(type) { | |||
case msgInfo: | |||
switch msg := m.Msg.(type) { | |||
case *ProposalMessage: | |||
return &msg.Proposal.BlockPartsHeader | |||
case *BlockPartMessage: | |||
return msg.Part | |||
case *VoteMessage: | |||
return msg.Vote | |||
} | |||
case EndHeightMessage: | |||
return m | |||
} | |||
return nil | |||
} | |||
// fresh state and mock store | |||
func stateAndStore(config *cfg.Config, pubKey crypto.PubKey) (dbm.DB, sm.State, *mockBlockStore) { | |||
stateDB := dbm.NewMemDB() | |||
state, _ := sm.MakeGenesisStateFromFile(config.GenesisFile()) | |||
store := NewMockBlockStore(config, state.ConsensusParams) | |||
return stateDB, state, store | |||
} | |||
//---------------------------------- | |||
// mock block store | |||
type mockBlockStore struct { | |||
config *cfg.Config | |||
params types.ConsensusParams | |||
chain []*types.Block | |||
commits []*types.Commit | |||
} | |||
// TODO: NewBlockStore(db.NewMemDB) ... | |||
func NewMockBlockStore(config *cfg.Config, params types.ConsensusParams) *mockBlockStore { | |||
return &mockBlockStore{config, params, nil, nil} | |||
} | |||
func (bs *mockBlockStore) Height() int64 { return int64(len(bs.chain)) } | |||
func (bs *mockBlockStore) LoadBlock(height int64) *types.Block { return bs.chain[height-1] } | |||
func (bs *mockBlockStore) LoadBlockMeta(height int64) *types.BlockMeta { | |||
block := bs.chain[height-1] | |||
return &types.BlockMeta{ | |||
		BlockID: types.BlockID{Hash: block.Hash(), PartsHeader: block.MakePartSet(bs.params.BlockPartSizeBytes).Header()},
Header: block.Header, | |||
} | |||
} | |||
func (bs *mockBlockStore) LoadBlockPart(height int64, index int) *types.Part { return nil } | |||
func (bs *mockBlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) { | |||
} | |||
func (bs *mockBlockStore) LoadBlockCommit(height int64) *types.Commit { | |||
return bs.commits[height-1] | |||
} | |||
func (bs *mockBlockStore) LoadSeenCommit(height int64) *types.Commit { | |||
return bs.commits[height-1] | |||
} | |||
//---------------------------------------- | |||
func TestInitChainUpdateValidators(t *testing.T) { | |||
val, _ := types.RandValidator(true, 10) | |||
vals := types.NewValidatorSet([]*types.Validator{val}) | |||
app := &initChainApp{vals: types.TM2PB.Validators(vals)} | |||
clientCreator := proxy.NewLocalClientCreator(app) | |||
config := ResetConfig("proxy_test_") | |||
privVal := privval.LoadFilePV(config.PrivValidatorFile()) | |||
stateDB, state, store := stateAndStore(config, privVal.GetPubKey()) | |||
oldValAddr := state.Validators.Validators[0].Address | |||
// now start the app using the handshake - it should sync | |||
genDoc, _ := sm.MakeGenesisDocFromFile(config.GenesisFile()) | |||
handshaker := NewHandshaker(stateDB, state, store, genDoc) | |||
proxyApp := proxy.NewAppConns(clientCreator, handshaker) | |||
if err := proxyApp.Start(); err != nil { | |||
t.Fatalf("Error starting proxy app connections: %v", err) | |||
} | |||
defer proxyApp.Stop() | |||
// reload the state, check the validator set was updated | |||
state = sm.LoadState(stateDB) | |||
newValAddr := state.Validators.Validators[0].Address | |||
expectValAddr := val.Address | |||
assert.NotEqual(t, oldValAddr, newValAddr) | |||
assert.Equal(t, newValAddr, expectValAddr) | |||
} | |||
func newInitChainApp(vals []abci.Validator) *initChainApp { | |||
return &initChainApp{ | |||
vals: vals, | |||
} | |||
} | |||
// initChainApp returns the given validators on InitChain
type initChainApp struct { | |||
abci.BaseApplication | |||
vals []abci.Validator | |||
} | |||
func (ica *initChainApp) InitChain(req abci.RequestInitChain) abci.ResponseInitChain { | |||
return abci.ResponseInitChain{ | |||
Validators: ica.vals, | |||
} | |||
} |
@ -1,134 +0,0 @@ | |||
package consensus | |||
import ( | |||
"time" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
"github.com/tendermint/tmlibs/log" | |||
) | |||
var ( | |||
tickTockBufferSize = 10 | |||
) | |||
// TimeoutTicker is a timer that schedules timeouts | |||
// conditional on the height/round/step in the timeoutInfo. | |||
// The timeoutInfo.Duration may be non-positive. | |||
type TimeoutTicker interface { | |||
Start() error | |||
Stop() error | |||
Chan() <-chan timeoutInfo // on which to receive a timeout | |||
ScheduleTimeout(ti timeoutInfo) // reset the timer | |||
SetLogger(log.Logger) | |||
} | |||
// timeoutTicker wraps time.Timer, | |||
// scheduling timeouts only for greater height/round/step | |||
// than what it's already seen. | |||
// Timeouts are scheduled along the tickChan, | |||
// and fired on the tockChan. | |||
type timeoutTicker struct { | |||
cmn.BaseService | |||
timer *time.Timer | |||
tickChan chan timeoutInfo // for scheduling timeouts | |||
tockChan chan timeoutInfo // for notifying about them | |||
} | |||
// NewTimeoutTicker returns a new TimeoutTicker. | |||
func NewTimeoutTicker() TimeoutTicker { | |||
tt := &timeoutTicker{ | |||
timer: time.NewTimer(0), | |||
tickChan: make(chan timeoutInfo, tickTockBufferSize), | |||
tockChan: make(chan timeoutInfo, tickTockBufferSize), | |||
} | |||
tt.BaseService = *cmn.NewBaseService(nil, "TimeoutTicker", tt) | |||
tt.stopTimer() // don't want to fire until the first scheduled timeout | |||
return tt | |||
} | |||
// OnStart implements cmn.Service. It starts the timeout routine. | |||
func (t *timeoutTicker) OnStart() error { | |||
go t.timeoutRoutine() | |||
return nil | |||
} | |||
// OnStop implements cmn.Service. It stops the timeout routine. | |||
func (t *timeoutTicker) OnStop() { | |||
t.BaseService.OnStop() | |||
t.stopTimer() | |||
} | |||
// Chan returns a channel on which timeouts are sent. | |||
func (t *timeoutTicker) Chan() <-chan timeoutInfo { | |||
return t.tockChan | |||
} | |||
// ScheduleTimeout schedules a new timeout by sending on the internal tickChan. | |||
// The timeoutRoutine is always available to read from tickChan, so this won't block. | |||
// The scheduling may fail if the timeoutRoutine has already scheduled a timeout for a later height/round/step. | |||
func (t *timeoutTicker) ScheduleTimeout(ti timeoutInfo) { | |||
t.tickChan <- ti | |||
} | |||
//------------------------------------------------------------- | |||
// stop the timer and drain if necessary | |||
func (t *timeoutTicker) stopTimer() { | |||
// Stop() returns false if it was already fired or was stopped | |||
if !t.timer.Stop() { | |||
select { | |||
case <-t.timer.C: | |||
default: | |||
t.Logger.Debug("Timer already stopped") | |||
} | |||
} | |||
} | |||
// send on tickChan to start a new timer.
// timers are interrupted and replaced by new ticks from later steps.
// timeouts of 0 on the tickChan will be immediately relayed to the tockChan.
func (t *timeoutTicker) timeoutRoutine() { | |||
t.Logger.Debug("Starting timeout routine") | |||
var ti timeoutInfo | |||
for { | |||
select { | |||
case newti := <-t.tickChan: | |||
t.Logger.Debug("Received tick", "old_ti", ti, "new_ti", newti) | |||
// ignore tickers for old height/round/step | |||
if newti.Height < ti.Height { | |||
continue | |||
} else if newti.Height == ti.Height { | |||
if newti.Round < ti.Round { | |||
continue | |||
} else if newti.Round == ti.Round { | |||
if ti.Step > 0 && newti.Step <= ti.Step { | |||
continue | |||
} | |||
} | |||
} | |||
// stop the last timer | |||
t.stopTimer() | |||
// update timeoutInfo and reset timer | |||
// NOTE time.Timer allows duration to be non-positive | |||
ti = newti | |||
t.timer.Reset(ti.Duration) | |||
t.Logger.Debug("Scheduled timeout", "dur", ti.Duration, "height", ti.Height, "round", ti.Round, "step", ti.Step) | |||
case <-t.timer.C: | |||
t.Logger.Info("Timed out", "dur", ti.Duration, "height", ti.Height, "round", ti.Round, "step", ti.Step) | |||
			// the goroutine here guarantees timeoutRoutine doesn't block.
// Determinism comes from playback in the receiveRoutine. | |||
// We can eliminate it by merging the timeoutRoutine into receiveRoutine | |||
// and managing the timeouts ourselves with a millisecond ticker | |||
go func(toi timeoutInfo) { t.tockChan <- toi }(ti) | |||
case <-t.Quit(): | |||
return | |||
} | |||
} | |||
} |
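// exampleScheduleTimeout is a minimal usage sketch of the ticker: schedule a
// timeout for a given height/round/step, then wait for it to fire on Chan().
// The duration and height/round values are arbitrary, illustrative choices.
func exampleScheduleTimeout() timeoutInfo {
	ticker := NewTimeoutTicker()
	ticker.SetLogger(log.TestingLogger())
	if err := ticker.Start(); err != nil {
		panic(err)
	}
	defer ticker.Stop() // nolint: errcheck

	// schedule a 10ms timeout for height 1, round 0 (Step left at its zero value)
	ticker.ScheduleTimeout(timeoutInfo{Duration: 10 * time.Millisecond, Height: 1, Round: 0})

	// the timeout is relayed on the tock channel once the duration elapses
	return <-ticker.Chan()
}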
@ -1,261 +0,0 @@ | |||
package types | |||
import ( | |||
"errors" | |||
"fmt" | |||
"strings" | |||
"sync" | |||
"github.com/tendermint/tendermint/p2p" | |||
"github.com/tendermint/tendermint/types" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
type RoundVoteSet struct { | |||
Prevotes *types.VoteSet | |||
Precommits *types.VoteSet | |||
} | |||
var ( | |||
GotVoteFromUnwantedRoundError = errors.New("Peer has sent a vote that does not match our round for more than one round") | |||
) | |||
/* | |||
Keeps track of all VoteSets from round 0 to round 'round'. | |||
Also keeps track of up to one RoundVoteSet greater than | |||
'round' from each peer, to facilitate catchup syncing of commits. | |||
A commit is +2/3 precommits for a block at a round, | |||
but which round is not known in advance, so when a peer | |||
provides a precommit for a round greater than mtx.round, | |||
we create a new entry in roundVoteSets but also remember the | |||
peer to prevent abuse. | |||
We let each peer provide us with up to 2 unexpected "catchup" rounds. | |||
One for their LastCommit round, and another for the official commit round. | |||
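For example (with illustrative numbers): at round 5, a peer may provide one
precommit for round 7 and later another for round 8; a precommit for yet a
third unexpected round from the same peer is rejected with
GotVoteFromUnwantedRoundError (see TestPeerCatchupRounds).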
*/ | |||
type HeightVoteSet struct { | |||
chainID string | |||
height int64 | |||
valSet *types.ValidatorSet | |||
mtx sync.Mutex | |||
round int // max tracked round | |||
roundVoteSets map[int]RoundVoteSet // keys: [0...round] | |||
peerCatchupRounds map[p2p.ID][]int // keys: peer.ID; values: at most 2 rounds | |||
} | |||
func NewHeightVoteSet(chainID string, height int64, valSet *types.ValidatorSet) *HeightVoteSet { | |||
hvs := &HeightVoteSet{ | |||
chainID: chainID, | |||
} | |||
hvs.Reset(height, valSet) | |||
return hvs | |||
} | |||
func (hvs *HeightVoteSet) Reset(height int64, valSet *types.ValidatorSet) { | |||
hvs.mtx.Lock() | |||
defer hvs.mtx.Unlock() | |||
hvs.height = height | |||
hvs.valSet = valSet | |||
hvs.roundVoteSets = make(map[int]RoundVoteSet) | |||
hvs.peerCatchupRounds = make(map[p2p.ID][]int) | |||
hvs.addRound(0) | |||
hvs.round = 0 | |||
} | |||
func (hvs *HeightVoteSet) Height() int64 { | |||
hvs.mtx.Lock() | |||
defer hvs.mtx.Unlock() | |||
return hvs.height | |||
} | |||
func (hvs *HeightVoteSet) Round() int { | |||
hvs.mtx.Lock() | |||
defer hvs.mtx.Unlock() | |||
return hvs.round | |||
} | |||
// Create more RoundVoteSets up to round. | |||
func (hvs *HeightVoteSet) SetRound(round int) { | |||
hvs.mtx.Lock() | |||
defer hvs.mtx.Unlock() | |||
if hvs.round != 0 && (round < hvs.round+1) { | |||
cmn.PanicSanity("SetRound() must increment hvs.round") | |||
} | |||
for r := hvs.round + 1; r <= round; r++ { | |||
if _, ok := hvs.roundVoteSets[r]; ok { | |||
			continue // Already exists because of peerCatchupRounds.
} | |||
hvs.addRound(r) | |||
} | |||
hvs.round = round | |||
} | |||
func (hvs *HeightVoteSet) addRound(round int) { | |||
if _, ok := hvs.roundVoteSets[round]; ok { | |||
cmn.PanicSanity("addRound() for an existing round") | |||
} | |||
// log.Debug("addRound(round)", "round", round) | |||
prevotes := types.NewVoteSet(hvs.chainID, hvs.height, round, types.VoteTypePrevote, hvs.valSet) | |||
precommits := types.NewVoteSet(hvs.chainID, hvs.height, round, types.VoteTypePrecommit, hvs.valSet) | |||
hvs.roundVoteSets[round] = RoundVoteSet{ | |||
Prevotes: prevotes, | |||
Precommits: precommits, | |||
} | |||
} | |||
// Duplicate votes return added=false, err=nil. | |||
// By convention, peerID is "" if origin is self. | |||
func (hvs *HeightVoteSet) AddVote(vote *types.Vote, peerID p2p.ID) (added bool, err error) { | |||
hvs.mtx.Lock() | |||
defer hvs.mtx.Unlock() | |||
if !types.IsVoteTypeValid(vote.Type) { | |||
return | |||
} | |||
voteSet := hvs.getVoteSet(vote.Round, vote.Type) | |||
if voteSet == nil { | |||
if rndz := hvs.peerCatchupRounds[peerID]; len(rndz) < 2 { | |||
hvs.addRound(vote.Round) | |||
voteSet = hvs.getVoteSet(vote.Round, vote.Type) | |||
hvs.peerCatchupRounds[peerID] = append(rndz, vote.Round) | |||
} else { | |||
// punish peer | |||
err = GotVoteFromUnwantedRoundError | |||
return | |||
} | |||
} | |||
added, err = voteSet.AddVote(vote) | |||
return | |||
} | |||
func (hvs *HeightVoteSet) Prevotes(round int) *types.VoteSet { | |||
hvs.mtx.Lock() | |||
defer hvs.mtx.Unlock() | |||
return hvs.getVoteSet(round, types.VoteTypePrevote) | |||
} | |||
func (hvs *HeightVoteSet) Precommits(round int) *types.VoteSet { | |||
hvs.mtx.Lock() | |||
defer hvs.mtx.Unlock() | |||
return hvs.getVoteSet(round, types.VoteTypePrecommit) | |||
} | |||
// POLInfo returns the last round and blockID with +2/3 prevotes for a
// particular block (or nil), and -1 for the round if no such round exists.
func (hvs *HeightVoteSet) POLInfo() (polRound int, polBlockID types.BlockID) { | |||
hvs.mtx.Lock() | |||
defer hvs.mtx.Unlock() | |||
for r := hvs.round; r >= 0; r-- { | |||
rvs := hvs.getVoteSet(r, types.VoteTypePrevote) | |||
polBlockID, ok := rvs.TwoThirdsMajority() | |||
if ok { | |||
return r, polBlockID | |||
} | |||
} | |||
return -1, types.BlockID{} | |||
} | |||
func (hvs *HeightVoteSet) getVoteSet(round int, type_ byte) *types.VoteSet { | |||
rvs, ok := hvs.roundVoteSets[round] | |||
if !ok { | |||
return nil | |||
} | |||
switch type_ { | |||
case types.VoteTypePrevote: | |||
return rvs.Prevotes | |||
case types.VoteTypePrecommit: | |||
return rvs.Precommits | |||
default: | |||
cmn.PanicSanity(cmn.Fmt("Unexpected vote type %X", type_)) | |||
return nil | |||
} | |||
} | |||
// If a peer claims that it has a +2/3 majority for a given blockKey, call this.
// NOTE: if there are too many peers, or too much peer churn, | |||
// this can cause memory issues. | |||
// TODO: implement ability to remove peers too | |||
func (hvs *HeightVoteSet) SetPeerMaj23(round int, type_ byte, peerID p2p.ID, blockID types.BlockID) error { | |||
hvs.mtx.Lock() | |||
defer hvs.mtx.Unlock() | |||
if !types.IsVoteTypeValid(type_) { | |||
return fmt.Errorf("SetPeerMaj23: Invalid vote type %v", type_) | |||
} | |||
voteSet := hvs.getVoteSet(round, type_) | |||
if voteSet == nil { | |||
return nil // something we don't know about yet | |||
} | |||
return voteSet.SetPeerMaj23(types.P2PID(peerID), blockID) | |||
} | |||
//--------------------------------------------------------- | |||
// string and json | |||
func (hvs *HeightVoteSet) String() string { | |||
return hvs.StringIndented("") | |||
} | |||
func (hvs *HeightVoteSet) StringIndented(indent string) string { | |||
hvs.mtx.Lock() | |||
defer hvs.mtx.Unlock() | |||
vsStrings := make([]string, 0, (len(hvs.roundVoteSets)+1)*2) | |||
// rounds 0 ~ hvs.round inclusive | |||
for round := 0; round <= hvs.round; round++ { | |||
voteSetString := hvs.roundVoteSets[round].Prevotes.StringShort() | |||
vsStrings = append(vsStrings, voteSetString) | |||
voteSetString = hvs.roundVoteSets[round].Precommits.StringShort() | |||
vsStrings = append(vsStrings, voteSetString) | |||
} | |||
// all other peer catchup rounds | |||
for round, roundVoteSet := range hvs.roundVoteSets { | |||
if round <= hvs.round { | |||
continue | |||
} | |||
voteSetString := roundVoteSet.Prevotes.StringShort() | |||
vsStrings = append(vsStrings, voteSetString) | |||
voteSetString = roundVoteSet.Precommits.StringShort() | |||
vsStrings = append(vsStrings, voteSetString) | |||
} | |||
return cmn.Fmt(`HeightVoteSet{H:%v R:0~%v | |||
%s %v | |||
%s}`, | |||
hvs.height, hvs.round, | |||
indent, strings.Join(vsStrings, "\n"+indent+" "), | |||
indent) | |||
} | |||
func (hvs *HeightVoteSet) MarshalJSON() ([]byte, error) { | |||
hvs.mtx.Lock() | |||
defer hvs.mtx.Unlock() | |||
allVotes := hvs.toAllRoundVotes() | |||
return cdc.MarshalJSON(allVotes) | |||
} | |||
func (hvs *HeightVoteSet) toAllRoundVotes() []roundVotes { | |||
totalRounds := hvs.round + 1 | |||
allVotes := make([]roundVotes, totalRounds) | |||
// rounds 0 ~ hvs.round inclusive | |||
for round := 0; round < totalRounds; round++ { | |||
allVotes[round] = roundVotes{ | |||
Round: round, | |||
Prevotes: hvs.roundVoteSets[round].Prevotes.VoteStrings(), | |||
PrevotesBitArray: hvs.roundVoteSets[round].Prevotes.BitArrayString(), | |||
Precommits: hvs.roundVoteSets[round].Precommits.VoteStrings(), | |||
PrecommitsBitArray: hvs.roundVoteSets[round].Precommits.BitArrayString(), | |||
} | |||
} | |||
// TODO: all other peer catchup rounds | |||
return allVotes | |||
} | |||
type roundVotes struct { | |||
Round int `json:"round"` | |||
Prevotes []string `json:"prevotes"` | |||
PrevotesBitArray string `json:"prevotes_bit_array"` | |||
Precommits []string `json:"precommits"` | |||
PrecommitsBitArray string `json:"precommits_bit_array"` | |||
} |
@ -1,69 +0,0 @@ | |||
package types | |||
import ( | |||
"testing" | |||
"time" | |||
cfg "github.com/tendermint/tendermint/config" | |||
"github.com/tendermint/tendermint/types" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
var config *cfg.Config // NOTE: must be reset for each _test.go file | |||
func init() { | |||
config = cfg.ResetTestRoot("consensus_height_vote_set_test") | |||
} | |||
func TestPeerCatchupRounds(t *testing.T) { | |||
valSet, privVals := types.RandValidatorSet(10, 1) | |||
hvs := NewHeightVoteSet(config.ChainID(), 1, valSet) | |||
vote999_0 := makeVoteHR(t, 1, 999, privVals, 0) | |||
added, err := hvs.AddVote(vote999_0, "peer1") | |||
if !added || err != nil { | |||
t.Error("Expected to successfully add vote from peer", added, err) | |||
} | |||
vote1000_0 := makeVoteHR(t, 1, 1000, privVals, 0) | |||
added, err = hvs.AddVote(vote1000_0, "peer1") | |||
if !added || err != nil { | |||
t.Error("Expected to successfully add vote from peer", added, err) | |||
} | |||
vote1001_0 := makeVoteHR(t, 1, 1001, privVals, 0) | |||
added, err = hvs.AddVote(vote1001_0, "peer1") | |||
if err != GotVoteFromUnwantedRoundError { | |||
t.Errorf("Expected GotVoteFromUnwantedRoundError, but got %v", err) | |||
} | |||
if added { | |||
t.Error("Expected to *not* add vote from peer, too many catchup rounds.") | |||
} | |||
added, err = hvs.AddVote(vote1001_0, "peer2") | |||
if !added || err != nil { | |||
t.Error("Expected to successfully add vote from another peer") | |||
} | |||
} | |||
func makeVoteHR(t *testing.T, height int64, round int, privVals []types.PrivValidator, valIndex int) *types.Vote { | |||
privVal := privVals[valIndex] | |||
vote := &types.Vote{ | |||
ValidatorAddress: privVal.GetAddress(), | |||
ValidatorIndex: valIndex, | |||
Height: height, | |||
Round: round, | |||
Timestamp: time.Now().UTC(), | |||
Type: types.VoteTypePrecommit, | |||
		BlockID: types.BlockID{Hash: []byte("fakehash"), PartsHeader: types.PartSetHeader{}},
} | |||
chainID := config.ChainID() | |||
err := privVal.SignVote(chainID, vote) | |||
	if err != nil {
		panic(cmn.Fmt("Error signing vote: %v", err))
	}
return vote | |||
} |
@ -1,57 +0,0 @@ | |||
package types | |||
import ( | |||
"fmt" | |||
"time" | |||
"github.com/tendermint/tendermint/types" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
//----------------------------------------------------------------------------- | |||
// PeerRoundState contains the known state of a peer. | |||
// NOTE: Read-only when returned by PeerState.GetRoundState(). | |||
type PeerRoundState struct { | |||
Height int64 `json:"height"` // Height peer is at | |||
Round int `json:"round"` // Round peer is at, -1 if unknown. | |||
Step RoundStepType `json:"step"` // Step peer is at | |||
StartTime time.Time `json:"start_time"` // Estimated start of round 0 at this height | |||
Proposal bool `json:"proposal"` // True if peer has proposal for this round | |||
ProposalBlockPartsHeader types.PartSetHeader `json:"proposal_block_parts_header"` // | |||
ProposalBlockParts *cmn.BitArray `json:"proposal_block_parts"` // | |||
ProposalPOLRound int `json:"proposal_pol_round"` // Proposal's POL round. -1 if none. | |||
ProposalPOL *cmn.BitArray `json:"proposal_pol"` // nil until ProposalPOLMessage received. | |||
Prevotes *cmn.BitArray `json:"prevotes"` // All votes peer has for this round | |||
Precommits *cmn.BitArray `json:"precommits"` // All precommits peer has for this round | |||
LastCommitRound int `json:"last_commit_round"` // Round of commit for last height. -1 if none. | |||
LastCommit *cmn.BitArray `json:"last_commit"` // All commit precommits of commit for last height. | |||
CatchupCommitRound int `json:"catchup_commit_round"` // Round that we have commit for. Not necessarily unique. -1 if none. | |||
CatchupCommit *cmn.BitArray `json:"catchup_commit"` // All commit precommits peer has for this height & CatchupCommitRound | |||
} | |||
// String returns a string representation of the PeerRoundState | |||
func (prs PeerRoundState) String() string { | |||
return prs.StringIndented("") | |||
} | |||
// StringIndented returns a string representation of the PeerRoundState | |||
func (prs PeerRoundState) StringIndented(indent string) string { | |||
return fmt.Sprintf(`PeerRoundState{ | |||
%s %v/%v/%v @%v | |||
%s Proposal %v -> %v | |||
%s POL %v (round %v) | |||
%s Prevotes %v | |||
%s Precommits %v | |||
%s LastCommit %v (round %v) | |||
%s Catchup %v (round %v) | |||
%s}`, | |||
indent, prs.Height, prs.Round, prs.Step, prs.StartTime, | |||
indent, prs.ProposalBlockPartsHeader, prs.ProposalBlockParts, | |||
indent, prs.ProposalPOL, prs.ProposalPOLRound, | |||
indent, prs.Prevotes, | |||
indent, prs.Precommits, | |||
indent, prs.LastCommit, prs.LastCommitRound, | |||
indent, prs.CatchupCommit, prs.CatchupCommitRound, | |||
indent) | |||
} |
@ -1,164 +0,0 @@ | |||
package types | |||
import ( | |||
"encoding/json" | |||
"fmt" | |||
"time" | |||
"github.com/tendermint/tendermint/types" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
//----------------------------------------------------------------------------- | |||
// RoundStepType enumerates the state of the consensus state machine
type RoundStepType uint8 // These must be numeric, ordered. | |||
// RoundStepType | |||
const ( | |||
	RoundStepNewHeight     = RoundStepType(0x01) // Wait until CommitTime + timeoutCommit
RoundStepNewRound = RoundStepType(0x02) // Setup new round and go to RoundStepPropose | |||
RoundStepPropose = RoundStepType(0x03) // Did propose, gossip proposal | |||
RoundStepPrevote = RoundStepType(0x04) // Did prevote, gossip prevotes | |||
RoundStepPrevoteWait = RoundStepType(0x05) // Did receive any +2/3 prevotes, start timeout | |||
RoundStepPrecommit = RoundStepType(0x06) // Did precommit, gossip precommits | |||
RoundStepPrecommitWait = RoundStepType(0x07) // Did receive any +2/3 precommits, start timeout | |||
RoundStepCommit = RoundStepType(0x08) // Entered commit state machine | |||
// NOTE: RoundStepNewHeight acts as RoundStepCommitWait. | |||
) | |||
// String returns a string representation of the RoundStepType
func (rs RoundStepType) String() string { | |||
switch rs { | |||
case RoundStepNewHeight: | |||
return "RoundStepNewHeight" | |||
case RoundStepNewRound: | |||
return "RoundStepNewRound" | |||
case RoundStepPropose: | |||
return "RoundStepPropose" | |||
case RoundStepPrevote: | |||
return "RoundStepPrevote" | |||
case RoundStepPrevoteWait: | |||
return "RoundStepPrevoteWait" | |||
case RoundStepPrecommit: | |||
return "RoundStepPrecommit" | |||
case RoundStepPrecommitWait: | |||
return "RoundStepPrecommitWait" | |||
case RoundStepCommit: | |||
return "RoundStepCommit" | |||
default: | |||
return "RoundStepUnknown" // Cannot panic. | |||
} | |||
} | |||
//----------------------------------------------------------------------------- | |||
// RoundState defines the internal consensus state. | |||
// NOTE: Not thread safe. Should only be manipulated by functions downstream | |||
// of the cs.receiveRoutine | |||
type RoundState struct { | |||
Height int64 `json:"height"` // Height we are working on | |||
Round int `json:"round"` | |||
Step RoundStepType `json:"step"` | |||
StartTime time.Time `json:"start_time"` | |||
CommitTime time.Time `json:"commit_time"` // Subjective time when +2/3 precommits for Block at Round were found | |||
Validators *types.ValidatorSet `json:"validators"` | |||
Proposal *types.Proposal `json:"proposal"` | |||
ProposalBlock *types.Block `json:"proposal_block"` | |||
ProposalBlockParts *types.PartSet `json:"proposal_block_parts"` | |||
LockedRound int `json:"locked_round"` | |||
LockedBlock *types.Block `json:"locked_block"` | |||
LockedBlockParts *types.PartSet `json:"locked_block_parts"` | |||
ValidRound int `json:"valid_round"` // Last known round with POL for non-nil valid block. | |||
ValidBlock *types.Block `json:"valid_block"` // Last known block of POL mentioned above. | |||
	ValidBlockParts *types.PartSet `json:"valid_block_parts"` // Last known block parts of POL mentioned above.
Votes *HeightVoteSet `json:"votes"` | |||
CommitRound int `json:"commit_round"` // | |||
LastCommit *types.VoteSet `json:"last_commit"` // Last precommits at Height-1 | |||
LastValidators *types.ValidatorSet `json:"last_validators"` | |||
} | |||
// RoundStateSimple is a compressed version of the RoundState for use in RPC
type RoundStateSimple struct { | |||
HeightRoundStep string `json:"height/round/step"` | |||
StartTime time.Time `json:"start_time"` | |||
ProposalBlockHash cmn.HexBytes `json:"proposal_block_hash"` | |||
LockedBlockHash cmn.HexBytes `json:"locked_block_hash"` | |||
ValidBlockHash cmn.HexBytes `json:"valid_block_hash"` | |||
Votes json.RawMessage `json:"height_vote_set"` | |||
} | |||
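// Illustrative JSON shape (values abridged; hashes are hex-encoded):
//   {"height/round/step":"1/0/1","start_time":"...","proposal_block_hash":"...",
//    "locked_block_hash":"...","valid_block_hash":"...","height_vote_set":[...]}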
// Compress the RoundState to RoundStateSimple | |||
func (rs *RoundState) RoundStateSimple() RoundStateSimple { | |||
votesJSON, err := rs.Votes.MarshalJSON() | |||
if err != nil { | |||
panic(err) | |||
} | |||
return RoundStateSimple{ | |||
HeightRoundStep: fmt.Sprintf("%d/%d/%d", rs.Height, rs.Round, rs.Step), | |||
StartTime: rs.StartTime, | |||
ProposalBlockHash: rs.ProposalBlock.Hash(), | |||
LockedBlockHash: rs.LockedBlock.Hash(), | |||
ValidBlockHash: rs.ValidBlock.Hash(), | |||
Votes: votesJSON, | |||
} | |||
} | |||
// RoundStateEvent returns the H/R/S of the RoundState as an event. | |||
func (rs *RoundState) RoundStateEvent() types.EventDataRoundState { | |||
// XXX: copy the RoundState | |||
// if we want to avoid this, we may need synchronous events after all | |||
rsCopy := *rs | |||
edrs := types.EventDataRoundState{ | |||
Height: rs.Height, | |||
Round: rs.Round, | |||
Step: rs.Step.String(), | |||
RoundState: &rsCopy, | |||
} | |||
return edrs | |||
} | |||
// String returns a string representation of the RoundState
func (rs *RoundState) String() string { | |||
return rs.StringIndented("") | |||
} | |||
// StringIndented returns an indented string representation of the RoundState
func (rs *RoundState) StringIndented(indent string) string { | |||
return fmt.Sprintf(`RoundState{ | |||
%s H:%v R:%v S:%v | |||
%s StartTime: %v | |||
%s CommitTime: %v | |||
%s Validators: %v | |||
%s Proposal: %v | |||
%s ProposalBlock: %v %v | |||
%s LockedRound: %v | |||
%s LockedBlock: %v %v | |||
%s ValidRound: %v | |||
%s ValidBlock: %v %v | |||
%s Votes: %v | |||
%s LastCommit: %v | |||
%s LastValidators:%v | |||
%s}`, | |||
indent, rs.Height, rs.Round, rs.Step, | |||
indent, rs.StartTime, | |||
indent, rs.CommitTime, | |||
indent, rs.Validators.StringIndented(indent+" "), | |||
indent, rs.Proposal, | |||
indent, rs.ProposalBlockParts.StringShort(), rs.ProposalBlock.StringShort(), | |||
indent, rs.LockedRound, | |||
indent, rs.LockedBlockParts.StringShort(), rs.LockedBlock.StringShort(), | |||
indent, rs.ValidRound, | |||
indent, rs.ValidBlockParts.StringShort(), rs.ValidBlock.StringShort(), | |||
indent, rs.Votes.StringIndented(indent+" "), | |||
indent, rs.LastCommit.StringShort(), | |||
indent, rs.LastValidators.StringIndented(indent+" "), | |||
indent) | |||
} | |||
// StringShort returns a short string representation of the RoundState
func (rs *RoundState) StringShort() string { | |||
return fmt.Sprintf(`RoundState{H:%v R:%v S:%v ST:%v}`, | |||
rs.Height, rs.Round, rs.Step, rs.StartTime) | |||
} |
@ -1,95 +0,0 @@ | |||
package types | |||
import ( | |||
"testing" | |||
"time" | |||
"github.com/tendermint/go-amino" | |||
"github.com/tendermint/go-crypto" | |||
"github.com/tendermint/tendermint/types" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
func BenchmarkRoundStateDeepCopy(b *testing.B) { | |||
b.StopTimer() | |||
// Random validators | |||
nval, ntxs := 100, 100 | |||
vset, _ := types.RandValidatorSet(nval, 1) | |||
precommits := make([]*types.Vote, nval) | |||
blockID := types.BlockID{ | |||
Hash: cmn.RandBytes(20), | |||
PartsHeader: types.PartSetHeader{ | |||
Hash: cmn.RandBytes(20), | |||
}, | |||
} | |||
sig := crypto.SignatureEd25519{} | |||
for i := 0; i < nval; i++ { | |||
precommits[i] = &types.Vote{ | |||
ValidatorAddress: types.Address(cmn.RandBytes(20)), | |||
Timestamp: time.Now(), | |||
BlockID: blockID, | |||
Signature: sig, | |||
} | |||
} | |||
txs := make([]types.Tx, ntxs) | |||
for i := 0; i < ntxs; i++ { | |||
txs[i] = cmn.RandBytes(100) | |||
} | |||
// Random block | |||
block := &types.Block{ | |||
Header: &types.Header{ | |||
ChainID: cmn.RandStr(12), | |||
Time: time.Now(), | |||
LastBlockID: blockID, | |||
LastCommitHash: cmn.RandBytes(20), | |||
DataHash: cmn.RandBytes(20), | |||
ValidatorsHash: cmn.RandBytes(20), | |||
ConsensusHash: cmn.RandBytes(20), | |||
AppHash: cmn.RandBytes(20), | |||
LastResultsHash: cmn.RandBytes(20), | |||
EvidenceHash: cmn.RandBytes(20), | |||
}, | |||
Data: &types.Data{ | |||
Txs: txs, | |||
}, | |||
Evidence: types.EvidenceData{}, | |||
LastCommit: &types.Commit{ | |||
BlockID: blockID, | |||
Precommits: precommits, | |||
}, | |||
} | |||
parts := block.MakePartSet(4096) | |||
// Random Proposal | |||
proposal := &types.Proposal{ | |||
Timestamp: time.Now(), | |||
BlockPartsHeader: types.PartSetHeader{ | |||
Hash: cmn.RandBytes(20), | |||
}, | |||
POLBlockID: blockID, | |||
Signature: sig, | |||
} | |||
// Random HeightVoteSet | |||
// TODO: hvs := | |||
rs := &RoundState{ | |||
StartTime: time.Now(), | |||
CommitTime: time.Now(), | |||
Validators: vset, | |||
Proposal: proposal, | |||
ProposalBlock: block, | |||
ProposalBlockParts: parts, | |||
LockedBlock: block, | |||
LockedBlockParts: parts, | |||
ValidBlock: block, | |||
ValidBlockParts: parts, | |||
Votes: nil, // TODO | |||
LastCommit: nil, // TODO | |||
LastValidators: vset, | |||
} | |||
b.StartTimer() | |||
for i := 0; i < b.N; i++ { | |||
amino.DeepCopy(rs) | |||
} | |||
} |
@ -1,12 +0,0 @@ | |||
package types | |||
import ( | |||
"github.com/tendermint/go-amino" | |||
"github.com/tendermint/go-crypto" | |||
) | |||
var cdc = amino.NewCodec() | |||
func init() { | |||
crypto.RegisterAmino(cdc) | |||
} |
@ -1,13 +0,0 @@ | |||
package consensus | |||
import ( | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
// kind of arbitrary | |||
var Spec = "1" // async | |||
var Major = "0" // | |||
var Minor = "2" // replay refactor | |||
var Revision = "2" // validation -> commit | |||
var Version = cmn.Fmt("v%s/%s.%s.%s", Spec, Major, Minor, Revision) |
@ -1,323 +0,0 @@ | |||
package consensus | |||
import ( | |||
"encoding/binary" | |||
"fmt" | |||
"hash/crc32" | |||
"io" | |||
"path/filepath" | |||
"time" | |||
"github.com/pkg/errors" | |||
amino "github.com/tendermint/go-amino" | |||
"github.com/tendermint/tendermint/types" | |||
auto "github.com/tendermint/tmlibs/autofile" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
) | |||
const ( | |||
// must be greater than params.BlockGossip.BlockPartSizeBytes + a few bytes | |||
maxMsgSizeBytes = 1024 * 1024 // 1MB | |||
) | |||
//-------------------------------------------------------- | |||
// types and functions for saving consensus messages
type TimedWALMessage struct { | |||
Time time.Time `json:"time"` // for debugging purposes | |||
Msg WALMessage `json:"msg"` | |||
} | |||
// EndHeightMessage marks the end of the given height inside WAL. | |||
// @internal used by scripts/wal2json util. | |||
type EndHeightMessage struct { | |||
Height int64 `json:"height"` | |||
} | |||
type WALMessage interface{} | |||
func RegisterWALMessages(cdc *amino.Codec) { | |||
cdc.RegisterInterface((*WALMessage)(nil), nil) | |||
cdc.RegisterConcrete(types.EventDataRoundState{}, "tendermint/wal/EventDataRoundState", nil) | |||
cdc.RegisterConcrete(msgInfo{}, "tendermint/wal/MsgInfo", nil) | |||
cdc.RegisterConcrete(timeoutInfo{}, "tendermint/wal/TimeoutInfo", nil) | |||
cdc.RegisterConcrete(EndHeightMessage{}, "tendermint/wal/EndHeightMessage", nil) | |||
} | |||
//-------------------------------------------------------- | |||
// Simple write-ahead logger | |||
// WAL is an interface for any write-ahead logger. | |||
type WAL interface { | |||
Write(WALMessage) | |||
WriteSync(WALMessage) | |||
Group() *auto.Group | |||
SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) | |||
Start() error | |||
Stop() error | |||
Wait() | |||
} | |||
// baseWAL is a write-ahead logger: it writes msgs to disk before they are
// processed, and can be used for crash recovery and deterministic replay.
// TODO: currently the wal is overwritten during replay catchup; give it a
// mode so it's either reading or appending - must read to end to start
// appending again
type baseWAL struct { | |||
cmn.BaseService | |||
group *auto.Group | |||
enc *WALEncoder | |||
} | |||
func NewWAL(walFile string) (*baseWAL, error) { | |||
err := cmn.EnsureDir(filepath.Dir(walFile), 0700) | |||
if err != nil { | |||
return nil, errors.Wrap(err, "failed to ensure WAL directory is in place") | |||
} | |||
group, err := auto.OpenGroup(walFile) | |||
if err != nil { | |||
return nil, err | |||
} | |||
wal := &baseWAL{ | |||
group: group, | |||
enc: NewWALEncoder(group), | |||
} | |||
wal.BaseService = *cmn.NewBaseService(nil, "baseWAL", wal) | |||
return wal, nil | |||
} | |||
func (wal *baseWAL) Group() *auto.Group { | |||
return wal.group | |||
} | |||
func (wal *baseWAL) OnStart() error { | |||
size, err := wal.group.Head.Size() | |||
if err != nil { | |||
return err | |||
} else if size == 0 { | |||
wal.WriteSync(EndHeightMessage{0}) | |||
} | |||
err = wal.group.Start() | |||
return err | |||
} | |||
func (wal *baseWAL) OnStop() { | |||
wal.group.Stop() | |||
wal.group.Close() | |||
} | |||
// Write is called in newStep and for each receive on the | |||
// peerMsgQueue and the timeoutTicker. | |||
// NOTE: does not call fsync() | |||
func (wal *baseWAL) Write(msg WALMessage) { | |||
if wal == nil { | |||
return | |||
} | |||
// Write the wal message | |||
if err := wal.enc.Encode(&TimedWALMessage{time.Now(), msg}); err != nil { | |||
panic(cmn.Fmt("Error writing msg to consensus wal: %v \n\nMessage: %v", err, msg)) | |||
} | |||
} | |||
// WriteSync is called when we receive a msg from ourselves | |||
// so that we write to disk before sending signed messages. | |||
// NOTE: calls fsync() | |||
func (wal *baseWAL) WriteSync(msg WALMessage) { | |||
if wal == nil { | |||
return | |||
} | |||
wal.Write(msg) | |||
if err := wal.group.Flush(); err != nil { | |||
panic(cmn.Fmt("Error flushing consensus wal buf to file. Error: %v \n", err)) | |||
} | |||
} | |||
// WALSearchOptions are optional arguments to SearchForEndHeight. | |||
type WALSearchOptions struct { | |||
// IgnoreDataCorruptionErrors set to true will result in skipping data corruption errors. | |||
IgnoreDataCorruptionErrors bool | |||
} | |||
// SearchForEndHeight searches for the EndHeightMessage with the given height
// and returns an auto.GroupReader, whether it was found, and an error.
// The group reader will be nil if found is false.
// | |||
// CONTRACT: caller must close group reader. | |||
func (wal *baseWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) { | |||
var msg *TimedWALMessage | |||
lastHeightFound := int64(-1) | |||
// NOTE: starting from the last file in the group because we're usually | |||
// searching for the last height. See replay.go | |||
min, max := wal.group.MinIndex(), wal.group.MaxIndex() | |||
wal.Logger.Debug("Searching for height", "height", height, "min", min, "max", max) | |||
for index := max; index >= min; index-- { | |||
gr, err = wal.group.NewReader(index) | |||
if err != nil { | |||
return nil, false, err | |||
} | |||
dec := NewWALDecoder(gr) | |||
for { | |||
msg, err = dec.Decode() | |||
if err == io.EOF { | |||
// OPTIMISATION: no need to look for height in older files if we've seen h < height | |||
if lastHeightFound > 0 && lastHeightFound < height { | |||
gr.Close() | |||
return nil, false, nil | |||
} | |||
// check next file | |||
break | |||
} | |||
if options.IgnoreDataCorruptionErrors && IsDataCorruptionError(err) { | |||
wal.Logger.Debug("Corrupted entry. Skipping...", "err", err) | |||
// do nothing | |||
continue | |||
} else if err != nil { | |||
gr.Close() | |||
return nil, false, err | |||
} | |||
if m, ok := msg.Msg.(EndHeightMessage); ok { | |||
lastHeightFound = m.Height | |||
if m.Height == height { // found | |||
wal.Logger.Debug("Found", "height", height, "index", index) | |||
return gr, true, nil | |||
} | |||
} | |||
} | |||
gr.Close() | |||
} | |||
return nil, false, nil | |||
} | |||
/////////////////////////////////////////////////////////////////////////////// | |||
// A WALEncoder writes custom-encoded WAL messages to an output stream. | |||
// | |||
// Format: 4 bytes CRC sum + 4 bytes length + arbitrary-length value (go-amino encoded) | |||
type WALEncoder struct { | |||
wr io.Writer | |||
} | |||
// NewWALEncoder returns a new encoder that writes to wr. | |||
func NewWALEncoder(wr io.Writer) *WALEncoder { | |||
return &WALEncoder{wr} | |||
} | |||
// Encode writes the custom encoding of v to the stream. | |||
func (enc *WALEncoder) Encode(v *TimedWALMessage) error { | |||
data := cdc.MustMarshalBinaryBare(v) | |||
crc := crc32.Checksum(data, crc32c) | |||
length := uint32(len(data)) | |||
totalLength := 8 + int(length) | |||
msg := make([]byte, totalLength) | |||
binary.BigEndian.PutUint32(msg[0:4], crc) | |||
binary.BigEndian.PutUint32(msg[4:8], length) | |||
copy(msg[8:], data) | |||
_, err := enc.wr.Write(msg) | |||
return err | |||
} | |||
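// Worked example: a message whose amino encoding is 10 bytes long yields an
// 18-byte frame:
//   bytes 0..3   big-endian CRC-32C of the 10 payload bytes
//   bytes 4..7   big-endian length (0x0000000A)
//   bytes 8..17  the amino-encoded payload itself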
/////////////////////////////////////////////////////////////////////////////// | |||
// IsDataCorruptionError returns true if data has been corrupted inside WAL. | |||
func IsDataCorruptionError(err error) bool { | |||
_, ok := err.(DataCorruptionError) | |||
return ok | |||
} | |||
// DataCorruptionError is an error that occurs if data on disk was corrupted.
type DataCorruptionError struct { | |||
cause error | |||
} | |||
func (e DataCorruptionError) Error() string { | |||
return fmt.Sprintf("DataCorruptionError[%v]", e.cause) | |||
} | |||
func (e DataCorruptionError) Cause() error { | |||
return e.cause | |||
} | |||
// A WALDecoder reads and decodes custom-encoded WAL messages from an input | |||
// stream. See WALEncoder for the format used. | |||
// | |||
// It will also compare the checksums and make sure the data size matches the
// length from the header. If that is not the case, an error will be returned.
type WALDecoder struct { | |||
rd io.Reader | |||
} | |||
// NewWALDecoder returns a new decoder that reads from rd. | |||
func NewWALDecoder(rd io.Reader) *WALDecoder { | |||
return &WALDecoder{rd} | |||
} | |||
// Decode reads the next custom-encoded value from its reader and returns it. | |||
func (dec *WALDecoder) Decode() (*TimedWALMessage, error) { | |||
	b := make([]byte, 4)
	// io.ReadFull guards against short reads silently truncating the checksum
	_, err := io.ReadFull(dec.rd, b)
if err == io.EOF { | |||
return nil, err | |||
} | |||
if err != nil { | |||
return nil, fmt.Errorf("failed to read checksum: %v", err) | |||
} | |||
crc := binary.BigEndian.Uint32(b) | |||
	b = make([]byte, 4)
	_, err = io.ReadFull(dec.rd, b)
if err != nil { | |||
return nil, fmt.Errorf("failed to read length: %v", err) | |||
} | |||
length := binary.BigEndian.Uint32(b) | |||
if length > maxMsgSizeBytes { | |||
return nil, fmt.Errorf("length %d exceeded maximum possible value of %d bytes", length, maxMsgSizeBytes) | |||
} | |||
	data := make([]byte, length)
	_, err = io.ReadFull(dec.rd, data)
if err != nil { | |||
return nil, fmt.Errorf("failed to read data: %v", err) | |||
} | |||
// check checksum before decoding data | |||
actualCRC := crc32.Checksum(data, crc32c) | |||
if actualCRC != crc { | |||
return nil, DataCorruptionError{fmt.Errorf("checksums do not match: (read: %v, actual: %v)", crc, actualCRC)} | |||
} | |||
var res = new(TimedWALMessage) // nolint: gosimple | |||
err = cdc.UnmarshalBinaryBare(data, res) | |||
if err != nil { | |||
return nil, DataCorruptionError{fmt.Errorf("failed to decode data: %v", err)} | |||
} | |||
return res, err | |||
} | |||
type nilWAL struct{} | |||
func (nilWAL) Write(m WALMessage) {} | |||
func (nilWAL) WriteSync(m WALMessage) {} | |||
func (nilWAL) Group() *auto.Group { return nil } | |||
func (nilWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) { | |||
return nil, false, nil | |||
} | |||
func (nilWAL) Start() error { return nil } | |||
func (nilWAL) Stop() error { return nil } | |||
func (nilWAL) Wait() {} |
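// exampleWALRoundTrip is a minimal usage sketch (the file name and height are
// illustrative): open a WAL, write an end-of-height marker synchronously,
// then search for it again.
func exampleWALRoundTrip(walFile string) error {
	wal, err := NewWAL(walFile)
	if err != nil {
		return err
	}
	if err := wal.Start(); err != nil {
		return err
	}
	defer wal.Stop() // nolint: errcheck

	// fsync an end-of-height marker for height 1
	wal.WriteSync(EndHeightMessage{1})

	gr, found, err := wal.SearchForEndHeight(1, &WALSearchOptions{})
	if err != nil {
		return err
	}
	if !found {
		return errors.New("end height 1 not found")
	}
	return gr.Close() // caller must close the group reader
}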
@ -1,31 +0,0 @@ | |||
// +build gofuzz | |||
package consensus | |||
import ( | |||
"bytes" | |||
"io" | |||
) | |||
func Fuzz(data []byte) int { | |||
dec := NewWALDecoder(bytes.NewReader(data)) | |||
for { | |||
msg, err := dec.Decode() | |||
if err == io.EOF { | |||
break | |||
} | |||
if err != nil { | |||
if msg != nil { | |||
panic("msg != nil on error") | |||
} | |||
return 0 | |||
} | |||
var w bytes.Buffer | |||
enc := NewWALEncoder(&w) | |||
err = enc.Encode(msg) | |||
if err != nil { | |||
panic(err) | |||
} | |||
} | |||
return 1 | |||
} |
@ -1,205 +0,0 @@ | |||
package consensus | |||
import ( | |||
"bufio" | |||
"bytes" | |||
"fmt" | |||
"os" | |||
"path/filepath" | |||
"strings" | |||
"time" | |||
"github.com/pkg/errors" | |||
"github.com/tendermint/abci/example/kvstore" | |||
bc "github.com/tendermint/tendermint/blockchain" | |||
cfg "github.com/tendermint/tendermint/config" | |||
"github.com/tendermint/tendermint/privval" | |||
"github.com/tendermint/tendermint/proxy" | |||
sm "github.com/tendermint/tendermint/state" | |||
"github.com/tendermint/tendermint/types" | |||
auto "github.com/tendermint/tmlibs/autofile" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
"github.com/tendermint/tmlibs/db" | |||
"github.com/tendermint/tmlibs/log" | |||
) | |||
// WALWithNBlocks generates a consensus WAL. It does this by spinning up a
// stripped-down version of a node (proxy app, event bus, consensus state) with
// a persistent kvstore application and a special consensus WAL instance
// (byteBufferWAL), and waits until numBlocks are created. It then returns the
// WAL contents.
func WALWithNBlocks(numBlocks int) (data []byte, err error) { | |||
config := getConfig() | |||
app := kvstore.NewPersistentKVStoreApplication(filepath.Join(config.DBDir(), "wal_generator")) | |||
logger := log.TestingLogger().With("wal_generator", "wal_generator") | |||
logger.Info("generating WAL (last height msg excluded)", "numBlocks", numBlocks) | |||
///////////////////////////////////////////////////////////////////////////// | |||
// COPY PASTE FROM node.go WITH A FEW MODIFICATIONS | |||
// NOTE: we can't import node package because of circular dependency | |||
privValidatorFile := config.PrivValidatorFile() | |||
privValidator := privval.LoadOrGenFilePV(privValidatorFile) | |||
genDoc, err := types.GenesisDocFromFile(config.GenesisFile()) | |||
if err != nil { | |||
return nil, errors.Wrap(err, "failed to read genesis file") | |||
} | |||
stateDB := db.NewMemDB() | |||
blockStoreDB := db.NewMemDB() | |||
state, err := sm.MakeGenesisState(genDoc) | |||
if err != nil { | |||
return nil, errors.Wrap(err, "failed to make genesis state") | |||
} | |||
blockStore := bc.NewBlockStore(blockStoreDB) | |||
handshaker := NewHandshaker(stateDB, state, blockStore, genDoc) | |||
proxyApp := proxy.NewAppConns(proxy.NewLocalClientCreator(app), handshaker) | |||
proxyApp.SetLogger(logger.With("module", "proxy")) | |||
if err := proxyApp.Start(); err != nil { | |||
return nil, errors.Wrap(err, "failed to start proxy app connections") | |||
} | |||
defer proxyApp.Stop() | |||
eventBus := types.NewEventBus() | |||
eventBus.SetLogger(logger.With("module", "events")) | |||
if err := eventBus.Start(); err != nil { | |||
return nil, errors.Wrap(err, "failed to start event bus") | |||
} | |||
defer eventBus.Stop() | |||
mempool := sm.MockMempool{} | |||
evpool := sm.MockEvidencePool{} | |||
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool) | |||
consensusState := NewConsensusState(config.Consensus, state.Copy(), blockExec, blockStore, mempool, evpool) | |||
consensusState.SetLogger(logger) | |||
consensusState.SetEventBus(eventBus) | |||
if privValidator != nil { | |||
consensusState.SetPrivValidator(privValidator) | |||
} | |||
// END OF COPY PASTE | |||
///////////////////////////////////////////////////////////////////////////// | |||
// set consensus wal to buffered WAL, which will write all incoming msgs to buffer | |||
var b bytes.Buffer | |||
wr := bufio.NewWriter(&b) | |||
numBlocksWritten := make(chan struct{}) | |||
wal := newByteBufferWAL(logger, NewWALEncoder(wr), int64(numBlocks), numBlocksWritten) | |||
// see wal.go#103 | |||
wal.Write(EndHeightMessage{0}) | |||
consensusState.wal = wal | |||
if err := consensusState.Start(); err != nil { | |||
return nil, errors.Wrap(err, "failed to start consensus state") | |||
} | |||
defer consensusState.Stop() | |||
select { | |||
case <-numBlocksWritten: | |||
wr.Flush() | |||
return b.Bytes(), nil | |||
case <-time.After(1 * time.Minute): | |||
wr.Flush() | |||
return b.Bytes(), fmt.Errorf("waited too long for tendermint to produce %d blocks (grep logs for `wal_generator`)", numBlocks) | |||
} | |||
} | |||
// returns a long but unique pathname for each test
func makePathname() string { | |||
// get path | |||
p, err := os.Getwd() | |||
if err != nil { | |||
panic(err) | |||
} | |||
sep := string(filepath.Separator) | |||
return strings.Replace(p, sep, "_", -1) | |||
} | |||
func randPort() int { | |||
// returns between base and base + spread | |||
base, spread := 20000, 20000 | |||
return base + cmn.RandIntn(spread) | |||
} | |||
func makeAddrs() (string, string, string) { | |||
start := randPort() | |||
return fmt.Sprintf("tcp://0.0.0.0:%d", start), | |||
fmt.Sprintf("tcp://0.0.0.0:%d", start+1), | |||
fmt.Sprintf("tcp://0.0.0.0:%d", start+2) | |||
} | |||
// getConfig returns a config for test cases | |||
func getConfig() *cfg.Config { | |||
pathname := makePathname() | |||
c := cfg.ResetTestRoot(fmt.Sprintf("%s_%d", pathname, cmn.RandInt())) | |||
// and we use random ports to run in parallel | |||
tm, rpc, grpc := makeAddrs() | |||
c.P2P.ListenAddress = tm | |||
c.RPC.ListenAddress = rpc | |||
c.RPC.GRPCListenAddress = grpc | |||
return c | |||
} | |||
// byteBufferWAL is a WAL which writes all msgs to a byte buffer. Writing stops
// when heightToStop is reached. The client is notified via the
// signalWhenStopsTo channel.
type byteBufferWAL struct { | |||
enc *WALEncoder | |||
stopped bool | |||
heightToStop int64 | |||
signalWhenStopsTo chan<- struct{} | |||
logger log.Logger | |||
} | |||
// needed for determinism | |||
var fixedTime, _ = time.Parse(time.RFC3339, "2017-01-02T15:04:05Z") | |||
func newByteBufferWAL(logger log.Logger, enc *WALEncoder, nBlocks int64, signalStop chan<- struct{}) *byteBufferWAL { | |||
return &byteBufferWAL{ | |||
enc: enc, | |||
heightToStop: nBlocks, | |||
signalWhenStopsTo: signalStop, | |||
logger: logger, | |||
} | |||
} | |||
// Write writes the message to the internal buffer, except when heightToStop
// is reached, in which case it signals the caller via signalWhenStopsTo and
// skips writing.
func (w *byteBufferWAL) Write(m WALMessage) { | |||
if w.stopped { | |||
w.logger.Debug("WAL already stopped. Not writing message", "msg", m) | |||
return | |||
} | |||
if endMsg, ok := m.(EndHeightMessage); ok { | |||
w.logger.Debug("WAL write end height message", "height", endMsg.Height, "stopHeight", w.heightToStop) | |||
if endMsg.Height == w.heightToStop { | |||
w.logger.Debug("Stopping WAL at height", "height", endMsg.Height) | |||
w.signalWhenStopsTo <- struct{}{} | |||
w.stopped = true | |||
return | |||
} | |||
} | |||
w.logger.Debug("WAL Write Message", "msg", m) | |||
err := w.enc.Encode(&TimedWALMessage{fixedTime, m}) | |||
if err != nil { | |||
panic(fmt.Sprintf("failed to encode the msg %v", m)) | |||
} | |||
} | |||
func (w *byteBufferWAL) WriteSync(m WALMessage) { | |||
w.Write(m) | |||
} | |||
func (w *byteBufferWAL) Group() *auto.Group { | |||
panic("not implemented") | |||
} | |||
func (w *byteBufferWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) { | |||
return nil, false, nil | |||
} | |||
func (w *byteBufferWAL) Start() error { return nil } | |||
func (w *byteBufferWAL) Stop() error { return nil } | |||
func (w *byteBufferWAL) Wait() {} |
@ -1,133 +0,0 @@ | |||
package consensus | |||
import ( | |||
"bytes" | |||
"crypto/rand" | |||
// "sync" | |||
"testing" | |||
"time" | |||
"github.com/tendermint/tendermint/consensus/types" | |||
tmtypes "github.com/tendermint/tendermint/types" | |||
cmn "github.com/tendermint/tmlibs/common" | |||
"github.com/stretchr/testify/assert" | |||
"github.com/stretchr/testify/require" | |||
) | |||
func TestWALEncoderDecoder(t *testing.T) { | |||
now := time.Now() | |||
msgs := []TimedWALMessage{ | |||
TimedWALMessage{Time: now, Msg: EndHeightMessage{0}}, | |||
TimedWALMessage{Time: now, Msg: timeoutInfo{Duration: time.Second, Height: 1, Round: 1, Step: types.RoundStepPropose}}, | |||
} | |||
b := new(bytes.Buffer) | |||
for _, msg := range msgs { | |||
b.Reset() | |||
enc := NewWALEncoder(b) | |||
err := enc.Encode(&msg) | |||
require.NoError(t, err) | |||
dec := NewWALDecoder(b) | |||
decoded, err := dec.Decode() | |||
require.NoError(t, err) | |||
assert.Equal(t, msg.Time.UTC(), decoded.Time) | |||
assert.Equal(t, msg.Msg, decoded.Msg) | |||
} | |||
} | |||
func TestWALSearchForEndHeight(t *testing.T) { | |||
walBody, err := WALWithNBlocks(6) | |||
if err != nil { | |||
t.Fatal(err) | |||
} | |||
walFile := tempWALWithData(walBody) | |||
wal, err := NewWAL(walFile) | |||
if err != nil { | |||
t.Fatal(err) | |||
} | |||
h := int64(3) | |||
gr, found, err := wal.SearchForEndHeight(h, &WALSearchOptions{}) | |||
assert.NoError(t, err, cmn.Fmt("expected not to err on height %d", h)) | |||
assert.True(t, found, cmn.Fmt("expected to find end height for %d", h)) | |||
assert.NotNil(t, gr, "expected group not to be nil") | |||
defer gr.Close() | |||
dec := NewWALDecoder(gr) | |||
msg, err := dec.Decode() | |||
assert.NoError(t, err, "expected to decode a message") | |||
rs, ok := msg.Msg.(tmtypes.EventDataRoundState) | |||
assert.True(t, ok, "expected message of type EventDataRoundState") | |||
assert.Equal(t, rs.Height, h+1, cmn.Fmt("wrong height")) | |||
} | |||
/* | |||
var initOnce sync.Once | |||
func registerInterfacesOnce() { | |||
initOnce.Do(func() { | |||
var _ = wire.RegisterInterface( | |||
struct{ WALMessage }{}, | |||
wire.ConcreteType{[]byte{}, 0x10}, | |||
) | |||
}) | |||
} | |||
*/ | |||
func nBytes(n int) []byte { | |||
buf := make([]byte, n) | |||
n, _ = rand.Read(buf) | |||
return buf[:n] | |||
} | |||
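// benchmarkWalDecode encodes a single TimedWALMessage carrying an n-byte
// random payload, then repeatedly decodes that same encoded message to
// measure decoder throughput and allocations.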
func benchmarkWalDecode(b *testing.B, n int) { | |||
// registerInterfacesOnce() | |||
buf := new(bytes.Buffer) | |||
enc := NewWALEncoder(buf) | |||
data := nBytes(n) | |||
enc.Encode(&TimedWALMessage{Msg: data, Time: time.Now().Round(time.Second)}) | |||
encoded := buf.Bytes() | |||
b.ResetTimer() | |||
for i := 0; i < b.N; i++ { | |||
buf.Reset() | |||
buf.Write(encoded) | |||
dec := NewWALDecoder(buf) | |||
if _, err := dec.Decode(); err != nil { | |||
b.Fatal(err) | |||
} | |||
} | |||
b.ReportAllocs() | |||
} | |||
func BenchmarkWalDecode512B(b *testing.B) { | |||
benchmarkWalDecode(b, 512) | |||
} | |||
func BenchmarkWalDecode10KB(b *testing.B) { | |||
benchmarkWalDecode(b, 10*1024) | |||
} | |||
func BenchmarkWalDecode100KB(b *testing.B) { | |||
benchmarkWalDecode(b, 100*1024) | |||
} | |||
func BenchmarkWalDecode1MB(b *testing.B) { | |||
benchmarkWalDecode(b, 1024*1024) | |||
} | |||
func BenchmarkWalDecode10MB(b *testing.B) { | |||
benchmarkWalDecode(b, 10*1024*1024) | |||
} | |||
func BenchmarkWalDecode100MB(b *testing.B) { | |||
benchmarkWalDecode(b, 100*1024*1024) | |||
} | |||
func BenchmarkWalDecode1GB(b *testing.B) { | |||
benchmarkWalDecode(b, 1024*1024*1024) | |||
} |
@ -1,14 +0,0 @@ | |||
package consensus | |||
import ( | |||
"github.com/tendermint/go-amino" | |||
"github.com/tendermint/go-crypto" | |||
) | |||
var cdc = amino.NewCodec() | |||
func init() { | |||
RegisterConsensusMessages(cdc) | |||
RegisterWALMessages(cdc) | |||
crypto.RegisterAmino(cdc) | |||
} |
@ -1,68 +0,0 @@ | |||
version: '3' | |||
services: | |||
node0: | |||
container_name: node0 | |||
image: "tendermint/localnode" | |||
ports: | |||
- "26656-26657:26656-26657" | |||
environment: | |||
- ID=0 | |||
- LOG=$${LOG:-tendermint.log} | |||
volumes: | |||
- ./build:/tendermint:Z | |||
networks: | |||
localnet: | |||
ipv4_address: 192.167.10.2 | |||
node1: | |||
container_name: node1 | |||
image: "tendermint/localnode" | |||
ports: | |||
- "26659-26660:26656-26657" | |||
environment: | |||
- ID=1 | |||
- LOG=$${LOG:-tendermint.log} | |||
volumes: | |||
- ./build:/tendermint:Z | |||
networks: | |||
localnet: | |||
ipv4_address: 192.167.10.3 | |||
node2: | |||
container_name: node2 | |||
image: "tendermint/localnode" | |||
environment: | |||
- ID=2 | |||
- LOG=$${LOG:-tendermint.log} | |||
ports: | |||
- "26661-26662:26656-26657" | |||
volumes: | |||
- ./build:/tendermint:Z | |||
networks: | |||
localnet: | |||
ipv4_address: 192.167.10.4 | |||
node3: | |||
container_name: node3 | |||
image: "tendermint/localnode" | |||
environment: | |||
- ID=3 | |||
- LOG=$${LOG:-tendermint.log} | |||
ports: | |||
- "26663-26664:26656-26657" | |||
volumes: | |||
- ./build:/tendermint:Z | |||
networks: | |||
localnet: | |||
ipv4_address: 192.167.10.5 | |||
networks: | |||
localnet: | |||
driver: bridge | |||
ipam: | |||
driver: default | |||
config: | |||
- | |||
subnet: 192.167.10.0/16 | |||
@ -1 +0,0 @@ | |||
2.7.14 |
@ -1,23 +0,0 @@ | |||
# Minimal makefile for Sphinx documentation | |||
# | |||
# You can set these variables from the command line. | |||
SPHINXOPTS = | |||
SPHINXBUILD = python -msphinx | |||
SPHINXPROJ = Tendermint | |||
SOURCEDIR = . | |||
BUILDDIR = _build | |||
# Put it first so that "make" without argument is like "make help". | |||
help: | |||
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) | |||
install: | |||
@pip install -r requirements.txt | |||
.PHONY: help Makefile | |||
# Catch-all target: route all unknown targets to Sphinx using the new | |||
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). | |||
%: Makefile | |||
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) |
@ -1,14 +0,0 @@ | |||
Here lies our documentation. After making edits, run: | |||
``` | |||
pip install -r requirements.txt | |||
make html | |||
``` | |||
to build the docs locally, then open the file `_build/html/index.html` in your browser.
**WARNING:** This documentation is intended to be viewed at: | |||
https://tendermint.readthedocs.io | |||
and may contain broken internal links when viewed from GitHub.
@ -1,17 +0,0 @@ | |||
.toggle { | |||
padding-bottom: 1em ; | |||
} | |||
.toggle .header { | |||
display: block; | |||
clear: both; | |||
cursor: pointer; | |||
} | |||
.toggle .header:after { | |||
content: " ▼"; | |||
} | |||
.toggle .header.open:after { | |||
content: " ▲"; | |||
} |
@ -1,10 +0,0 @@ | |||
let makeCodeBlocksCollapsible = function() { | |||
$(".toggle > *").hide(); | |||
$(".toggle .header").show(); | |||
$(".toggle .header").click(function() { | |||
$(this).parent().children().not(".header").toggle({"duration": 400}); | |||
$(this).parent().children(".header").toggleClass("open"); | |||
}); | |||
}; | |||
// we could use the immediately-invoked }(); form if we had access to jQuery in HEAD, i.e. we would need to force the theme
// to load jQuery before our custom scripts
@ -1,20 +0,0 @@ | |||
{% extends "!layout.html" %} | |||
{% set css_files = css_files + ["_static/custom_collapsible_code.css"] %} | |||
{# sadly, I didn't find a CSS-style way to add custom JS to a list that is automagically added to head like the CSS (above) #}
{% block extrahead %} | |||
<script type="text/javascript" src="_static/custom_collapsible_code.js"></script> | |||
{% endblock %} | |||
{% block footer %} | |||
<script type="text/javascript"> | |||
$(document).ready(function() { | |||
// using this approach as we don't have access to the jQuery selectors | |||
// when executing the function on load in HEAD | |||
makeCodeBlocksCollapsible(); | |||
}); | |||
</script> | |||
{% endblock %} | |||
@ -1,329 +0,0 @@ | |||
# Using ABCI-CLI | |||
To facilitate testing and debugging of ABCI servers and simple apps, we | |||
built a CLI, the `abci-cli`, for sending ABCI messages from the command | |||
line. | |||
## Install | |||
Make sure you [have Go installed](https://golang.org/doc/install). | |||
Next, install the `abci-cli` tool and example applications: | |||
go get -u github.com/tendermint/abci/cmd/abci-cli | |||
If this fails, you may need to use [dep](https://github.com/golang/dep) | |||
to get vendored dependencies: | |||
cd $GOPATH/src/github.com/tendermint/abci | |||
make get_tools | |||
make get_vendor_deps | |||
make install | |||
Now run `abci-cli` to see the list of commands: | |||
Usage: | |||
abci-cli [command] | |||
Available Commands: | |||
batch Run a batch of abci commands against an application | |||
check_tx Validate a tx | |||
commit Commit the application state and return the Merkle root hash | |||
console Start an interactive abci console for multiple commands | |||
counter ABCI demo example | |||
deliver_tx Deliver a new tx to the application | |||
kvstore ABCI demo example | |||
echo Have the application echo a message | |||
help Help about any command | |||
info Get some info about the application | |||
query Query the application state | |||
set_option Set an option on the application
Flags: | |||
--abci string socket or grpc (default "socket") | |||
--address string address of application socket (default "tcp://127.0.0.1:26658") | |||
-h, --help help for abci-cli | |||
-v, --verbose print the command and results as if it were a console session | |||
Use "abci-cli [command] --help" for more information about a command. | |||
## KVStore - First Example | |||
The `abci-cli` tool lets us send ABCI messages to our application, to | |||
help build and debug them. | |||
The most important messages are `deliver_tx`, `check_tx`, and `commit`, | |||
but there are others for convenience, configuration, and information | |||
purposes. | |||
We'll start a kvstore application, which was installed at the same time | |||
as `abci-cli` above. The kvstore just stores transactions in a merkle | |||
tree. | |||
Its code can be found | |||
[here](https://github.com/tendermint/abci/blob/master/cmd/abci-cli/abci-cli.go) | |||
and looks like: | |||
func cmdKVStore(cmd *cobra.Command, args []string) error { | |||
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout)) | |||
// Create the application - in memory or persisted to disk | |||
var app types.Application | |||
if flagPersist == "" { | |||
app = kvstore.NewKVStoreApplication() | |||
} else { | |||
app = kvstore.NewPersistentKVStoreApplication(flagPersist) | |||
app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore")) | |||
} | |||
// Start the listener | |||
srv, err := server.NewServer(flagAddrD, flagAbci, app) | |||
if err != nil { | |||
return err | |||
} | |||
srv.SetLogger(logger.With("module", "abci-server")) | |||
if err := srv.Start(); err != nil { | |||
return err | |||
} | |||
// Wait forever | |||
cmn.TrapSignal(func() { | |||
// Cleanup | |||
srv.Stop() | |||
}) | |||
return nil | |||
} | |||
Start by running: | |||
abci-cli kvstore | |||
And in another terminal, run | |||
abci-cli echo hello | |||
abci-cli info | |||
You'll see something like: | |||
-> data: hello | |||
-> data.hex: 68656C6C6F | |||
and: | |||
-> data: {"size":0} | |||
-> data.hex: 7B2273697A65223A307D | |||
An ABCI application must provide two things: | |||
- a socket server | |||
- a handler for ABCI messages | |||
When we run the `abci-cli` tool we open a new connection to the | |||
application's socket server, send the given ABCI message, and wait for a | |||
response. | |||
The server may be generic for a particular language, and we provide a | |||
[reference implementation in | |||
Golang](https://github.com/tendermint/abci/tree/master/server). See the | |||
[list of other ABCI implementations](./ecosystem.html) for servers in | |||
other languages. | |||
The handler is specific to the application, and may be arbitrary, so | |||
long as it is deterministic and conforms to the ABCI interface | |||
specification. | |||
So when we run `abci-cli info`, we open a new connection to the ABCI | |||
server, which calls the `Info()` method on the application, which tells | |||
us the number of transactions in our Merkle tree. | |||
Now, since every command opens a new connection, we provide the | |||
`abci-cli console` and `abci-cli batch` commands, to allow multiple ABCI | |||
messages to be sent over a single connection. | |||
Running `abci-cli console` should drop you in an interactive console for | |||
speaking ABCI messages to your application. | |||
Try running these commands: | |||
> echo hello | |||
-> code: OK | |||
-> data: hello | |||
-> data.hex: 0x68656C6C6F | |||
> info | |||
-> code: OK | |||
-> data: {"size":0} | |||
-> data.hex: 0x7B2273697A65223A307D | |||
> commit | |||
-> code: OK | |||
-> data.hex: 0x0000000000000000 | |||
> deliver_tx "abc" | |||
-> code: OK | |||
> info | |||
-> code: OK | |||
-> data: {"size":1} | |||
-> data.hex: 0x7B2273697A65223A317D | |||
> commit | |||
-> code: OK | |||
-> data.hex: 0x0200000000000000 | |||
> query "abc" | |||
-> code: OK | |||
-> log: exists | |||
-> height: 0 | |||
-> value: abc | |||
-> value.hex: 616263 | |||
> deliver_tx "def=xyz" | |||
-> code: OK | |||
> commit | |||
-> code: OK | |||
-> data.hex: 0x0400000000000000 | |||
> query "def" | |||
-> code: OK | |||
-> log: exists | |||
-> height: 0 | |||
-> value: xyz | |||
-> value.hex: 78797A | |||
Note that if we do `deliver_tx "abc"` it will store `(abc, abc)`, but if | |||
we do `deliver_tx "abc=efg"` it will store `(abc, efg)`. | |||
Similarly, you could put the commands in a file and run | |||
`abci-cli --verbose batch < myfile`. | |||
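
For instance, a hypothetical `myfile` could contain one console command per
line, just like the interactive session above:

    echo hello
    info
    deliver_tx "abc"
    commit
    query "abc"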
## Counter - Another Example | |||
Now that we've got the hang of it, let's try another application, the | |||
"counter" app. | |||
Like the kvstore app, its code can be found | |||
[here](https://github.com/tendermint/abci/blob/master/cmd/abci-cli/abci-cli.go) | |||
and looks like: | |||
func cmdCounter(cmd *cobra.Command, args []string) error { | |||
app := counter.NewCounterApplication(flagSerial) | |||
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout)) | |||
// Start the listener | |||
srv, err := server.NewServer(flagAddrC, flagAbci, app) | |||
if err != nil { | |||
return err | |||
} | |||
srv.SetLogger(logger.With("module", "abci-server")) | |||
if err := srv.Start(); err != nil { | |||
return err | |||
} | |||
// Wait forever | |||
cmn.TrapSignal(func() { | |||
// Cleanup | |||
srv.Stop() | |||
}) | |||
return nil | |||
} | |||
The counter app doesn't use a Merkle tree; it just counts how many times
we've sent a transaction, asked for a hash, or committed the state. The | |||
result of `commit` is just the number of transactions sent. | |||
This application has two modes: `serial=off` and `serial=on`. | |||
When `serial=on`, transactions must be a big-endian encoded incrementing | |||
integer, starting at 0. | |||
If `serial=off`, there are no restrictions on transactions. | |||
We can toggle the value of `serial` using the `set_option` ABCI message. | |||
When `serial=on`, some transactions are invalid. In a live blockchain, | |||
transactions collect in memory before they are committed into blocks. To | |||
avoid wasting resources on invalid transactions, ABCI provides the | |||
`check_tx` message, which application developers can use to accept or | |||
reject transactions, before they are stored in memory or gossipped to | |||
other peers. | |||
In this instance of the counter app, `check_tx` only allows transactions | |||
whose integer is greater than the last committed one. | |||
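
As a rough sketch of that check (not the app's actual source; `app.serial`,
`app.txCount`, and the `types.Err*` helpers are assumptions here, and the
decoding uses `encoding/binary` from the standard library):

    func (app *CounterApplication) CheckTx(tx []byte) types.Result {
        if !app.serial {
            return types.OK
        }
        if len(tx) > 8 {
            return types.ErrEncodingError // assumed abci error helper
        }
        // right-align short inputs into 8 bytes, then decode big-endian
        tx8 := make([]byte, 8)
        copy(tx8[len(tx8)-len(tx):], tx)
        txValue := binary.BigEndian.Uint64(tx8)
        if txValue < uint64(app.txCount) {
            return types.ErrBadNonce // produces the BadNonce code seen above
        }
        return types.OK
    }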
Let's kill the console and the kvstore application, and start the | |||
counter app: | |||
abci-cli counter | |||
In another window, start the `abci-cli console`: | |||
> set_option serial on | |||
-> code: OK | |||
-> log: OK (SetOption doesn't return anything.) | |||
> check_tx 0x00 | |||
-> code: OK | |||
> check_tx 0xff | |||
-> code: OK | |||
> deliver_tx 0x00 | |||
-> code: OK | |||
> check_tx 0x00 | |||
-> code: BadNonce | |||
-> log: Invalid nonce. Expected >= 1, got 0 | |||
> deliver_tx 0x01 | |||
-> code: OK | |||
> deliver_tx 0x04 | |||
-> code: BadNonce | |||
-> log: Invalid nonce. Expected 2, got 4 | |||
> info | |||
-> code: OK | |||
-> data: {"hashes":0,"txs":2} | |||
-> data.hex: 0x7B22686173686573223A302C22747873223A327D | |||
This is a very simple application, but between `counter` and `kvstore`,
it's easy to see how you can build out arbitrary application states on
top of the ABCI. [Hyperledger's
Burrow](https://github.com/hyperledger/burrow) also runs atop ABCI,
bringing with it Ethereum-like accounts, the Ethereum Virtual Machine,
Monax's permissioning scheme, and native contract extensions.
But the ultimate flexibility comes from being able to write the | |||
application easily in any language. | |||
We have implemented the counter in a number of languages [see the | |||
example directory](https://github.com/tendermint/abci/tree/master/example). | |||
To run the Node.js version, `cd` to `example/js` and run
node app.js | |||
(you'll have to kill the other counter application process). In another | |||
window, run the console and those previous ABCI commands. You should get | |||
the same results as for the Go version. | |||
## Bounties | |||
Want to write the counter app in your favorite language?! We'd be happy | |||
to add you to our [ecosystem](https://tendermint.com/ecosystem)! We're | |||
also offering [bounties](https://hackerone.com/tendermint/) for | |||
implementations in new languages! | |||
The `abci-cli` is designed strictly for testing and debugging. In a real | |||
deployment, the role of sending messages is taken by Tendermint, which | |||
connects to the app using three separate connections, each with its own | |||
pattern of messages. | |||
For more information, see the [application developers | |||
guide](./app-development.html). For examples of running an ABCI app with | |||
Tendermint, see the [getting started guide](./getting-started.html). | |||
Next is the ABCI specification. |
@ -1,50 +0,0 @@ | |||
# Application Architecture Guide | |||
Here we provide a brief guide on the recommended architecture of a | |||
Tendermint blockchain application. | |||
The following diagram provides a superb example: | |||
<https://drive.google.com/open?id=1yR2XpRi9YCY9H9uMfcw8-RMJpvDyvjz9> | |||
The end-user application here is the Cosmos Voyager, at the bottom left. | |||
Voyager communicates with a REST API exposed by a local Light-Client | |||
Daemon. The Light-Client Daemon is an application specific program that | |||
communicates with Tendermint nodes and verifies Tendermint light-client | |||
proofs through the Tendermint Core RPC. The Tendermint Core process | |||
communicates with a local ABCI application, where the user query or | |||
transaction is actually processed. | |||
The ABCI application must be a deterministic result of the Tendermint | |||
consensus - any external influence on the application state that didn't | |||
come through Tendermint could cause a consensus failure. Thus *nothing* | |||
should communicate with the application except Tendermint via ABCI. | |||
If the application is written in Go, it can be compiled into the | |||
Tendermint binary. Otherwise, it should use a unix socket to communicate | |||
with Tendermint. If it's necessary to use TCP, extra care must be taken | |||
to encrypt and authenticate the connection. | |||
All reads from the app happen through the Tendermint `/abci_query` | |||
endpoint. All writes to the app happen through the Tendermint | |||
`/broadcast_tx_*` endpoints. | |||
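
For example, assuming a node's RPC server is listening on its default port
(26657), a write followed by a read might look like:

    curl 'localhost:26657/broadcast_tx_commit?tx="abc"'
    curl 'localhost:26657/abci_query?data="abc"'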
The Light-Client Daemon is what provides light clients (end users) with | |||
nearly all the security of a full node. It formats and broadcasts | |||
transactions, and verifies proofs of queries and transaction results. | |||
Note that it need not be a daemon - the Light-Client logic could instead | |||
be implemented in the same process as the end-user application. | |||
Note that for ABCI applications with weaker security requirements, the
functionality of the Light-Client Daemon can be moved into the ABCI | |||
application process itself. That said, exposing the application process | |||
to anything besides Tendermint over ABCI requires extreme caution, as | |||
all transactions, and possibly all queries, should still pass through | |||
Tendermint. | |||
See the following for more extensive documentation: | |||
- [Interchain Standard for the Light-Client REST API](https://github.com/cosmos/cosmos-sdk/pull/1028) | |||
- [Tendermint RPC Docs](https://tendermint.github.io/slate/) | |||
- [Tendermint in Production](https://github.com/tendermint/tendermint/pull/1618) | |||
- [Tendermint Basics](https://tendermint.readthedocs.io/en/master/using-tendermint.html) | |||
- [ABCI spec](https://github.com/tendermint/abci/blob/develop/specification.md) |
@ -1,527 +0,0 @@ | |||
# Application Development Guide | |||
## ABCI Design | |||
The purpose of ABCI is to provide a clean interface between state | |||
transition machines on one computer and the mechanics of their | |||
replication across multiple computers. The former we call 'application | |||
logic' and the latter the 'consensus engine'. Application logic | |||
validates transactions and optionally executes transactions against some | |||
persistent state. A consensus engine ensures all transactions are | |||
replicated in the same order on every machine. We call each machine in a | |||
consensus engine a 'validator', and each validator runs the same | |||
transactions through the same application logic. In particular, we are | |||
interested in blockchain-style consensus engines, where transactions are | |||
committed in hash-linked blocks. | |||
The ABCI design has a few distinct components: | |||
- message protocol | |||
- pairs of request and response messages | |||
- consensus makes requests, application responds | |||
- defined using protobuf | |||
- server/client | |||
- consensus engine runs the client | |||
- application runs the server | |||
- two implementations: | |||
- async raw bytes | |||
- grpc | |||
- blockchain protocol | |||
- abci is connection oriented | |||
- Tendermint Core maintains three connections: | |||
- [mempool connection](#mempool-connection): for checking if | |||
transactions should be relayed before they are committed; | |||
only uses `CheckTx` | |||
- [consensus connection](#consensus-connection): for executing | |||
  transactions that have been committed. Message sequence is, for every
  block: `BeginBlock, [DeliverTx, ...], EndBlock, Commit`
- [query connection](#query-connection): for querying the | |||
application state; only uses Query and Info | |||
The mempool and consensus logic act as clients, and each maintains an | |||
open ABCI connection with the application, which hosts an ABCI server. | |||
Shown are the request and response types sent on each connection. | |||
## Message Protocol | |||
The message protocol consists of pairs of requests and responses. Some | |||
messages have no fields, while others may include byte-arrays, strings, | |||
or integers. See the `message Request` and `message Response` | |||
definitions in [the protobuf definition | |||
file](https://github.com/tendermint/abci/blob/master/types/types.proto), | |||
and the [protobuf | |||
documentation](https://developers.google.com/protocol-buffers/docs/overview) | |||
for more details. | |||
For each request, a server should respond with the corresponding | |||
response, where order of requests is preserved in the order of | |||
responses. | |||
## Server | |||
To use ABCI in your programming language of choice, there must be an ABCI
server in that language. Tendermint supports two kinds of implementation | |||
of the server: | |||
- Asynchronous, raw socket server (Tendermint Socket Protocol, also | |||
known as TSP or Teaspoon) | |||
- GRPC | |||
Both can be tested using the `abci-cli` by setting the `--abci` flag | |||
appropriately (ie. to `socket` or `grpc`). | |||
See examples, in various stages of maintenance, in | |||
[Go](https://github.com/tendermint/abci/tree/master/server), | |||
[JavaScript](https://github.com/tendermint/js-abci), | |||
[Python](https://github.com/tendermint/abci/tree/master/example/python3/abci), | |||
[C++](https://github.com/mdyring/cpp-tmsp), and | |||
[Java](https://github.com/jTendermint/jabci). | |||
### GRPC | |||
If GRPC is available in your language, this is the easiest approach, | |||
though it will have significant performance overhead. | |||
To get started with GRPC, copy in the [protobuf | |||
file](https://github.com/tendermint/abci/blob/master/types/types.proto) | |||
and compile it using the GRPC plugin for your language. For instance, | |||
for golang, the command is `protoc --go_out=plugins=grpc:. types.proto`. | |||
See the [grpc documentation for more details](http://www.grpc.io/docs/). | |||
`protoc` will autogenerate all the necessary code for ABCI client and | |||
server in your language, including whatever interface your application | |||
must satisfy to be used by the ABCI server for handling requests. | |||
### TSP | |||
If GRPC is not available in your language, or you require higher | |||
performance, or otherwise enjoy programming, you may implement your own | |||
ABCI server using the Tendermint Socket Protocol, known affectionately | |||
as Teaspoon. The first step is still to auto-generate the relevant data | |||
types and codec in your language using `protoc`. Messages coming over | |||
the socket are Protobuf3 encoded, but additionally length-prefixed to | |||
facilitate use as a streaming protocol. Protobuf3 doesn't have an | |||
official length-prefix standard, so we use our own. The first byte in | |||
the prefix represents the length of the Big Endian encoded length. The | |||
remaining bytes in the prefix are the Big Endian encoded length. | |||
For example, if the Protobuf3 encoded ABCI message is 0xDEADBEEF (4 | |||
bytes), the length-prefixed message is 0x0104DEADBEEF. If the Protobuf3 | |||
encoded ABCI message is 65535 bytes long, the length-prefixed message | |||
would be like 0x02FFFF.... | |||
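
As a minimal sketch of this scheme (not the reference implementation; it
uses only `encoding/binary` from the standard library):

    // lengthPrefix prepends the TSP prefix described above: one byte giving
    // the length of the big-endian encoded length, then the big-endian
    // encoded length itself, then the message bytes.
    func lengthPrefix(msg []byte) []byte {
        lenBuf := make([]byte, 8)
        binary.BigEndian.PutUint64(lenBuf, uint64(len(msg)))
        // drop leading zero bytes so the encoded length is as short as possible
        i := 0
        for i < len(lenBuf)-1 && lenBuf[i] == 0 {
            i++
        }
        lenBytes := lenBuf[i:]
        out := append([]byte{byte(len(lenBytes))}, lenBytes...)
        return append(out, msg...)
    }

Feeding the 4-byte message 0xDEADBEEF through this function yields
0x0104DEADBEEF, matching the example above.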
Note this prefixing does not apply for grpc. | |||
An ABCI server must also be able to support multiple connections, as | |||
Tendermint uses three connections. | |||
## Client | |||
There are currently two use-cases for an ABCI client. One is a testing | |||
tool, as in the `abci-cli`, which allows ABCI requests to be sent via | |||
command line. The other is a consensus engine, such as Tendermint Core, | |||
which makes requests to the application every time a new transaction is | |||
received or a block is committed. | |||
It is unlikely that you will need to implement a client. For details of | |||
our client, see | |||
[here](https://github.com/tendermint/abci/tree/master/client). | |||
Most of the examples below are from [kvstore | |||
application](https://github.com/tendermint/abci/blob/master/example/kvstore/kvstore.go), | |||
which is a part of the abci repo. [persistent_kvstore | |||
application](https://github.com/tendermint/abci/blob/master/example/kvstore/persistent_kvstore.go) | |||
is used to show `BeginBlock`, `EndBlock` and `InitChain` example | |||
implementations. | |||
## Blockchain Protocol | |||
In ABCI, a transaction is simply an arbitrary length byte-array. It is | |||
the application's responsibility to define the transaction codec as they | |||
please, and to use it for both CheckTx and DeliverTx. | |||
Note that there are two distinct means for running transactions, | |||
corresponding to stages of 'awareness' of the transaction in the | |||
network. The first stage is when a transaction is received by a | |||
validator from a client into the so-called mempool or transaction pool | |||
- this is where we use CheckTx. The second is when the transaction is
successfully committed on more than 2/3 of validators - where we use | |||
DeliverTx. In the former case, it may not be necessary to run all the | |||
state transitions associated with the transaction, as the transaction | |||
may not ultimately be committed until some much later time, when the | |||
result of its execution will be different. For instance, an Ethereum | |||
ABCI app would check signatures and amounts in CheckTx, but would not | |||
actually execute any contract code until the DeliverTx, so as to avoid | |||
executing state transitions that have not been finalized. | |||
To formalize the distinction further, two explicit ABCI connections are | |||
made between Tendermint Core and the application: the mempool connection | |||
and the consensus connection. We also make a third connection, the query | |||
connection, to query the local state of the app. | |||
### Mempool Connection | |||
The mempool connection is used *only* for CheckTx requests. Transactions | |||
are run using CheckTx in the same order they were received by the | |||
validator. If the CheckTx returns `OK`, the transaction is kept in | |||
memory and relayed to other peers in the same order it was received. | |||
Otherwise, it is discarded. | |||
CheckTx requests run concurrently with block processing; so they should | |||
run against a copy of the main application state which is reset after | |||
every block. This copy is necessary to track transitions made by a | |||
sequence of CheckTx requests before they are included in a block. When a | |||
block is committed, the application must ensure to reset the mempool | |||
state to the latest committed state. Tendermint Core will then filter | |||
through all transactions in the mempool, removing any that were included | |||
in the block, and re-run the rest using CheckTx against the post-Commit | |||
mempool state (this behaviour can be turned off with | |||
`[mempool] recheck = false`). | |||
In go: | |||
func (app *KVStoreApplication) CheckTx(tx []byte) types.Result { | |||
return types.OK | |||
} | |||
In Java: | |||
ResponseCheckTx requestCheckTx(RequestCheckTx req) { | |||
byte[] transaction = req.getTx().toByteArray(); | |||
// validate transaction | |||
if (notValid) { | |||
return ResponseCheckTx.newBuilder().setCode(CodeType.BadNonce).setLog("invalid tx").build(); | |||
} else { | |||
return ResponseCheckTx.newBuilder().setCode(CodeType.OK).build(); | |||
} | |||
} | |||
### Replay Protection | |||
To prevent old transactions from being replayed, CheckTx must implement | |||
replay protection. | |||
Tendermint provides the first layer of defence by keeping a lightweight
in-memory cache of the last 100k transactions (`[mempool] cache_size`) in
the mempool. If Tendermint has just started, or the clients have sent more
than 100k transactions, old transactions may be sent to the application. So
it is important that CheckTx implements some logic to handle them.
There are cases where a transaction will (or may) become valid in some | |||
future state, in which case you probably want to disable Tendermint's | |||
cache. You can do that by setting `[mempool] cache_size = 0` in the | |||
config. | |||
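
As a hedged sketch of such application-side replay protection (the
transaction format, the `parseSenderNonce` helper, and the `lastNonce`
table are all hypothetical):

    // CheckTx rejects any transaction whose nonce is not strictly greater
    // than the last nonce seen for that sender, so replays are neither kept
    // in the mempool nor gossiped to peers.
    func (app *MyApp) CheckTx(tx []byte) types.Result {
        sender, nonce, err := parseSenderNonce(tx) // hypothetical decoder
        if err != nil {
            return types.ErrEncodingError // assumed abci error helper
        }
        if nonce <= app.lastNonce[sender] {
            return types.ErrBadNonce // replayed or stale transaction
        }
        app.lastNonce[sender] = nonce
        return types.OK
    }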
### Consensus Connection | |||
The consensus connection is used only when a new block is committed, and | |||
communicates all information from the block in a series of requests: | |||
`BeginBlock, [DeliverTx, ...], EndBlock, Commit`. That is, when a block | |||
is committed in the consensus, we send a list of DeliverTx requests (one | |||
for each transaction) sandwiched by BeginBlock and EndBlock requests, | |||
and followed by a Commit. | |||
### DeliverTx | |||
DeliverTx is the workhorse of the blockchain. Tendermint sends the | |||
DeliverTx requests asynchronously but in order, and relies on the | |||
underlying socket protocol (ie. TCP) to ensure they are received by the | |||
app in order. They have already been ordered in the global consensus by | |||
the Tendermint protocol. | |||
DeliverTx returns an abci.Result, which includes a Code, Data, and Log.
The code may be non-zero (non-OK), meaning the corresponding transaction | |||
should have been rejected by the mempool, but may have been included in | |||
a block by a Byzantine proposer. | |||
The block header will be updated (TODO) to include some commitment to | |||
the results of DeliverTx, be it a bitarray of non-OK transactions, or a | |||
merkle root of the data returned by the DeliverTx requests, or both. | |||
In go: | |||
// tx is either "key=value" or just arbitrary bytes | |||
func (app *KVStoreApplication) DeliverTx(tx []byte) types.Result { | |||
parts := strings.Split(string(tx), "=") | |||
if len(parts) == 2 { | |||
app.state.Set([]byte(parts[0]), []byte(parts[1])) | |||
} else { | |||
app.state.Set(tx, tx) | |||
} | |||
return types.OK | |||
} | |||
In Java: | |||
/** | |||
* Using Protobuf types from the protoc compiler, we always start with a byte[] | |||
*/ | |||
ResponseDeliverTx deliverTx(RequestDeliverTx request) { | |||
byte[] transaction = request.getTx().toByteArray(); | |||
// validate your transaction | |||
if (notValid) { | |||
return ResponseDeliverTx.newBuilder().setCode(CodeType.BadNonce).setLog("transaction was invalid").build(); | |||
} else { | |||
return ResponseDeliverTx.newBuilder().setCode(CodeType.OK).build();
} | |||
} | |||
### Commit | |||
Once all processing of the block is complete, Tendermint sends the | |||
Commit request and blocks waiting for a response. While the mempool may | |||
run concurrently with block processing (the BeginBlock, DeliverTxs, and | |||
EndBlock), it is locked for the Commit request so that its state can be | |||
safely reset during Commit. This means the app *MUST NOT* do any | |||
blocking communication with the mempool (ie. `broadcast_tx`) during
Commit, or there will be deadlock. Note also that all remaining | |||
transactions in the mempool are replayed on the mempool connection | |||
(CheckTx) following a commit. | |||
The app should respond to the Commit request with a byte array, which is | |||
the deterministic state root of the application. It is included in the | |||
header of the next block. It can be used to provide easily verified | |||
Merkle-proofs of the state of the application. | |||
It is expected that the app will persist state to disk on Commit. The | |||
option to have all transactions replayed from some previous block is the | |||
job of the [Handshake](#handshake). | |||
In go: | |||
func (app *KVStoreApplication) Commit() types.Result { | |||
hash := app.state.Hash() | |||
return types.NewResultOK(hash, "") | |||
} | |||
In Java: | |||
ResponseCommit requestCommit(RequestCommit requestCommit) { | |||
// update the internal app-state | |||
byte[] newAppState = calculateAppState(); | |||
// and return it to the node | |||
return ResponseCommit.newBuilder().setCode(CodeType.OK).setData(ByteString.copyFrom(newAppState)).build(); | |||
} | |||
### BeginBlock | |||
The BeginBlock request can be used to run some code at the beginning of | |||
every block. It also allows Tendermint to send the current block hash | |||
and header to the application, before it sends any of the transactions. | |||
The app should remember the latest height and header (ie. from which it | |||
has run a successful Commit) so that it can tell Tendermint where to | |||
pick up from when it restarts. See information on the Handshake, below. | |||
In go: | |||
// Track the block hash and header information | |||
func (app *PersistentKVStoreApplication) BeginBlock(params types.RequestBeginBlock) { | |||
// update latest block info | |||
app.blockHeader = params.Header | |||
// reset valset changes | |||
app.changes = make([]*types.Validator, 0) | |||
} | |||
In Java: | |||
/* | |||
* all types come from protobuf definition | |||
*/ | |||
ResponseBeginBlock requestBeginBlock(RequestBeginBlock req) { | |||
Header header = req.getHeader(); | |||
byte[] prevAppHash = header.getAppHash().toByteArray(); | |||
long prevHeight = header.getHeight(); | |||
long numTxs = header.getNumTxs(); | |||
// run your pre-block logic. Maybe prepare a state snapshot, message components, etc | |||
return ResponseBeginBlock.newBuilder().build(); | |||
} | |||
### EndBlock | |||
The EndBlock request can be used to run some code at the end of every | |||
block. Additionally, the response may contain a list of validators, | |||
which can be used to update the validator set. To add a new validator or | |||
update an existing one, simply include them in the list returned in the | |||
EndBlock response. To remove one, include it in the list with a `power` | |||
equal to `0`. Tendermint core will take care of updating the validator | |||
set. Note the change in voting power must be strictly less than 1/3 per | |||
block if you want a light client to be able to prove the transition | |||
externally. See the [light client | |||
docs](https://godoc.org/github.com/tendermint/tendermint/lite#hdr-How_We_Track_Validators) | |||
for details on how it tracks validators. | |||
In go: | |||
// Update the validator set | |||
func (app *PersistentKVStoreApplication) EndBlock(req types.RequestEndBlock) types.ResponseEndBlock { | |||
return types.ResponseEndBlock{ValidatorUpdates: app.ValUpdates} | |||
} | |||
In Java: | |||
/* | |||
* Assume that one validator changes. The new validator has a power of 10 | |||
*/ | |||
ResponseEndBlock requestEndBlock(RequestEndBlock req) { | |||
final long currentHeight = req.getHeight(); | |||
final byte[] validatorPubKey = getValPubKey(); | |||
ResponseEndBlock.Builder builder = ResponseEndBlock.newBuilder(); | |||
builder.addDiffs(1, Types.Validator.newBuilder().setPower(10L).setPubKey(ByteString.copyFrom(validatorPubKey)).build()); | |||
return builder.build(); | |||
} | |||
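
To remove a validator, per the rule above, append an update whose power is
zero. A hedged Go sketch, reusing the `ValUpdates` slice from the Go example
above (the `types.Validator` field names and the pointer slice are assumed
from the examples in this guide):

    // schedule removal: Tendermint core applies the zero-power update
    // when it is returned from EndBlock
    app.ValUpdates = append(app.ValUpdates, &types.Validator{
        PubKey: valPubKey, // the validator's public key bytes
        Power:  0,
    })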
### Query Connection | |||
This connection is used to query the application without engaging | |||
consensus. It's exposed over the Tendermint Core RPC, so clients can
query the app without exposing a server on the app itself, but they must | |||
serialize each query as a single byte array. Additionally, certain | |||
"standardized" queries may be used to inform local decisions, for | |||
instance about which peers to connect to. | |||
Tendermint Core currently uses the Query connection to filter peers upon | |||
connecting, according to IP address or public key. For instance, | |||
returning non-OK ABCI response to either of the following queries will | |||
cause Tendermint to not connect to the corresponding peer: | |||
- `p2p/filter/addr/<addr>`, where `<addr>` is an IP address. | |||
- `p2p/filter/pubkey/<pubkey>`, where `<pubkey>` is the hex-encoded | |||
ED25519 key of the node (not its validator key)
Note: these query formats are subject to change! | |||
In go: | |||
func (app *KVStoreApplication) Query(reqQuery types.RequestQuery) (resQuery types.ResponseQuery) { | |||
if reqQuery.Prove { | |||
value, proof, exists := app.state.Proof(reqQuery.Data) | |||
resQuery.Index = -1 // TODO make Proof return index | |||
resQuery.Key = reqQuery.Data | |||
resQuery.Value = value | |||
resQuery.Proof = proof | |||
if exists { | |||
resQuery.Log = "exists" | |||
} else { | |||
resQuery.Log = "does not exist" | |||
} | |||
return | |||
} else { | |||
index, value, exists := app.state.Get(reqQuery.Data) | |||
resQuery.Index = int64(index) | |||
resQuery.Value = value | |||
if exists { | |||
resQuery.Log = "exists" | |||
} else { | |||
resQuery.Log = "does not exist" | |||
} | |||
return | |||
} | |||
} | |||
In Java: | |||
ResponseQuery requestQuery(RequestQuery req) { | |||
final boolean isProveQuery = req.getProve(); | |||
final ResponseQuery.Builder responseBuilder = ResponseQuery.newBuilder(); | |||
if (isProveQuery) { | |||
com.app.example.ProofResult proofResult = generateProof(req.getData().toByteArray()); | |||
final byte[] proofAsByteArray = proofResult.getAsByteArray(); | |||
responseBuilder.setProof(ByteString.copyFrom(proofAsByteArray)); | |||
responseBuilder.setKey(req.getData()); | |||
responseBuilder.setValue(ByteString.copyFrom(proofResult.getData())); | |||
responseBuilder.setLog(proofResult.getLogValue());
} else { | |||
byte[] queryData = req.getData().toByteArray(); | |||
final com.app.example.QueryResult result = generateQueryResult(queryData); | |||
responseBuilder.setIndex(result.getIndex()); | |||
responseBuilder.setValue(ByteString.copyFrom(result.getValue())); | |||
responseBuilder.setLog(result.getLogValue()); | |||
} | |||
return responseBuilder.build(); | |||
} | |||
### Handshake | |||
When the app or tendermint restarts, they need to sync to a common | |||
height. When an ABCI connection is first established, Tendermint will | |||
call `Info` on the Query connection. The response should contain the | |||
LastBlockHeight and LastBlockAppHash - the former is the last block for | |||
which the app ran Commit successfully, the latter is the response from | |||
that Commit. | |||
Using this information, Tendermint will determine what needs to be | |||
replayed, if anything, against the app, to ensure both Tendermint and | |||
the app are synced to the latest block height. | |||
If the app returns a LastBlockHeight of 0, Tendermint will just replay | |||
all blocks. | |||
In go: | |||
func (app *KVStoreApplication) Info(req types.RequestInfo) (resInfo types.ResponseInfo) { | |||
return types.ResponseInfo{Data: cmn.Fmt("{\"size\":%v}", app.state.Size())} | |||
} | |||
In Java: | |||
ResponseInfo requestInfo(RequestInfo req) { | |||
final byte[] lastAppHash = getLastAppHash(); | |||
final long lastHeight = getLastHeight(); | |||
return ResponseInfo.newBuilder().setLastBlockAppHash(ByteString.copyFrom(lastAppHash)).setLastBlockHeight(lastHeight).build(); | |||
} | |||
### Genesis | |||
`InitChain` will be called once, at genesis. `params` includes the
initial validator set. Later on, it may be extended to take parts of the | |||
consensus params. | |||
In go: | |||
// Save the validators in the merkle tree | |||
func (app *PersistentKVStoreApplication) InitChain(params types.RequestInitChain) { | |||
for _, v := range params.Validators { | |||
r := app.updateValidator(v) | |||
if r.IsErr() { | |||
app.logger.Error("Error updating validators", "r", r) | |||
} | |||
} | |||
} | |||
In Java: | |||
/* | |||
* all types come from protobuf definition | |||
*/ | |||
ResponseInitChain requestInitChain(RequestInitChain req) { | |||
final int validatorsCount = req.getValidatorsCount(); | |||
final List<Types.Validator> validatorsList = req.getValidatorsList(); | |||
validatorsList.forEach((validator) -> { | |||
long power = validator.getPower(); | |||
byte[] validatorPubKey = validator.getPubKey().toByteArray(); | |||
// do something for validator setup in app
}); | |||
return ResponseInitChain.newBuilder().build(); | |||
} |
@ -1,5 +0,0 @@ | |||
# Architecture Decision Records | |||
This is a location to record all high-level architecture decisions in the tendermint project. Not the implementation details, but the reasoning that happened. This should be referred to for guidance on the "right way" to extend the application. And if we notice that the original decisions were lacking, we should have another open discussion, record the new decisions here, and then modify the code to match.
Read up on the concept in this [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t). |
@ -1,216 +0,0 @@ | |||
# ADR 1: Logging | |||
## Context | |||
The current logging system in Tendermint is very static and not flexible enough.
Issues: [358](https://github.com/tendermint/tendermint/issues/358), [375](https://github.com/tendermint/tendermint/issues/375). | |||
What we want from the new system: | |||
- per package dynamic log levels | |||
- dynamic logger setting (logger tied to the processing struct) | |||
- conventions | |||
- be more visually appealing | |||
"dynamic" here means the ability to set smth in runtime. | |||
## Decision | |||
### 1) An interface | |||
First, we will need an interface for all of our libraries (`tmlibs`, Tendermint, etc.). My personal preference is the go-kit `Logger` interface (see Appendix A), but that is too big a change. Plus we will still need levels.
```go | |||
// log.go
type Logger interface { | |||
Debug(msg string, keyvals ...interface{}) error | |||
Info(msg string, keyvals ...interface{}) error | |||
Error(msg string, keyvals ...interface{}) error | |||
With(keyvals ...interface{}) Logger | |||
} | |||
``` | |||
On a side note: difference between `Info` and `Notice` is subtle. We probably | |||
could do without `Notice`. Don't think we need `Panic` or `Fatal` as a part of | |||
the interface. These funcs could be implemented as helpers. In fact, we already | |||
have some in `tmlibs/common`. | |||
- `Debug` - extended output for devs | |||
- `Info` - all that is useful for a user | |||
- `Error` - errors | |||
`Notice` should become `Info`, `Warn` either `Error` or `Debug` depending on the message, `Crit` -> `Error`. | |||
This interface should go into `tmlibs/log`. All libraries which are part of the core (tendermint/tendermint) should obey it. | |||
### 2) Logger with our current formatting | |||
On top of this interface, we will need to implement a stdout logger, which will be used when Tendermint is configured to output logs to STDOUT. | |||
Many people say that they like the current output, so let's stick with it. | |||
``` | |||
NOTE[04-25|14:45:08] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0 | |||
``` | |||
Couple of minor changes: | |||
``` | |||
I[04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0 | |||
``` | |||
Notice the level is encoded using only one char plus milliseconds. | |||
Note: there are many other formats out there like [logfmt](https://brandur.org/logfmt). | |||
This logger could be implemented using any logging library - [logrus](https://github.com/sirupsen/logrus), [go-kit/log](https://github.com/go-kit/kit/tree/master/log), [zap](https://github.com/uber-go/zap), or log15 - so long as it
a) supports colored output<br>
b) is moderately fast (buffering) <br>
c) conforms to the new interface, or an adapter can be written for it <br>
d) is somewhat configurable<br>
go-kit is my favorite so far. Check out how easy it is to color errors in red: https://github.com/go-kit/kit/blob/master/log/term/example_test.go#L12. Although, coloring can only be applied to the whole string :(
``` | |||
go-kit +: flexible, modular | |||
go-kit “-”: logfmt format https://brandur.org/logfmt | |||
logrus +: popular, feature rich (hooks), API and output is more like what we want | |||
logrus -: not so flexible | |||
``` | |||
```go | |||
// tm_logger.go
// NewTmLogger returns a logger that encodes keyvals to the Writer in | |||
// tm format. | |||
func NewTmLogger(w io.Writer) Logger { | |||
return &tmLogger{kitlog.NewLogfmtLogger(w)} | |||
} | |||
func (l tmLogger) SetLevel(lvl string) {
	switch lvl {
	case "debug":
		l.sourceLogger = level.NewFilter(l.sourceLogger, level.AllowDebug())
	}
}

func (l tmLogger) Info(msg string, keyvals ...interface{}) error {
	return l.sourceLogger.Log(append([]interface{}{"msg", msg}, keyvals...)...)
}

// log.go
func With(logger Logger, keyvals ...interface{}) Logger {
	// sketch: assumes access to the underlying kit logger
	return kitlog.With(logger.sourceLogger, keyvals...)
}
``` | |||
Usage: | |||
```go | |||
logger := log.NewTmLogger(os.Stdout) | |||
logger.SetLevel(config.GetString("log_level")) | |||
node.SetLogger(log.With(logger, "node", Name)) | |||
``` | |||
**Other log formatters** | |||
In the future, we may want other formatters like JSONFormatter. | |||
``` | |||
{ "level": "notice", "time": "2017-04-25 14:45:08.562471297 -0400 EDT", "module": "consensus", "msg": "ABCI Replay Blocks", "appHeight": 0, "storeHeight": 0, "stateHeight": 0 } | |||
``` | |||
### 3) Dynamic logger setting | |||
https://dave.cheney.net/2017/01/23/the-package-level-logger-anti-pattern | |||
This is the hardest part and where the most work will be done. logger should be tied to the processing struct, or the context if it adds some fields to the logger. | |||
```go | |||
type BaseService struct { | |||
log log15.Logger | |||
name string | |||
started uint32 // atomic | |||
stopped uint32 // atomic | |||
... | |||
} | |||
``` | |||
BaseService already contains `log` field, so most of the structs embedding it should be fine. We should rename it to `logger`. | |||
The only thing missing is the ability to set logger: | |||
``` | |||
func (bs *BaseService) SetLogger(l log.Logger) { | |||
bs.logger = l | |||
} | |||
``` | |||
### 4) Conventions | |||
Important keyvals should go first. Example: | |||
``` | |||
correct | |||
I[04-25|14:45:08.322] ABCI Replay Blocks module=consensus instance=1 appHeight=0 storeHeight=0 stateHeight=0 | |||
``` | |||
not | |||
``` | |||
wrong | |||
I[04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0 instance=1 | |||
``` | |||
For that, in most cases, you'll need to add the `instance` field to the logger upon creation, not when you log a particular message:
```go | |||
colorFn := func(keyvals ...interface{}) term.FgBgColor { | |||
for i := 1; i < len(keyvals); i += 2 { | |||
if keyvals[i] == "instance" && keyvals[i+1] == "1" { | |||
return term.FgBgColor{Fg: term.Blue} | |||
		} else if keyvals[i] == "instance" && keyvals[i+1] == "2" {
return term.FgBgColor{Fg: term.Red} | |||
} | |||
} | |||
return term.FgBgColor{} | |||
} | |||
logger := term.NewLogger(os.Stdout, log.NewTmLogger, colorFn) | |||
c1 := NewConsensusReactor(...) | |||
c1.SetLogger(log.With(logger, "instance", 1)) | |||
c2 := NewConsensusReactor(...) | |||
c2.SetLogger(log.With(logger, "instance", 2)) | |||
``` | |||
## Status | |||
proposed | |||
## Consequences | |||
### Positive | |||
Dynamic logger, which could be turned off for some modules at runtime. Public interface for other projects using Tendermint libraries. | |||
### Negative | |||
We may lose the ability to color keys in key-value pairs. go-kit allows you to easily change foreground/background colors of the whole string, but not its parts.
### Neutral | |||
## Appendix A. | |||
I really like the minimalistic approach go-kit took with its logger https://github.com/go-kit/kit/tree/master/log:
``` | |||
type Logger interface { | |||
Log(keyvals ...interface{}) error | |||
} | |||
``` | |||
See [The Hunt for a Logger Interface](https://go-talks.appspot.com/github.com/ChrisHines/talks/structured-logging/structured-logging.slide). The advantage is greater composability (check out how go-kit defines colored logging or log-leveled logging on top of this interface https://github.com/go-kit/kit/tree/master/log). |
@ -1,90 +0,0 @@ | |||
# ADR 2: Event Subscription | |||
## Context | |||
In the light client (or any other client), the user may want to **subscribe to | |||
a subset of transactions** (rather than all of them) using `/subscribe?event=X`. For | |||
example, I want to subscribe for all transactions associated with a particular | |||
account. Same for fetching. The user may want to **fetch transactions based on | |||
some filter** (rather than fetching all the blocks). For example, I want to get | |||
all transactions for a particular account in the last two weeks (`tx's block | |||
time >= '2017-06-05'`). | |||
Now you can't even subscribe to "all txs" in Tendermint. | |||
The goal is a simple and easy to use API for doing that. | |||
![Tx Send Flow Diagram](img/tags1.png) | |||
## Decision | |||
The ABCI app returns tags with a `DeliverTx` response inside the `data` field
(_for now; later we may create a separate field_). Tags are a list of
key-value pairs, protobuf encoded.
Example data: | |||
```json | |||
{ | |||
"abci.account.name": "Igor", | |||
"abci.account.address": "0xdeadbeef", | |||
"tx.gas": 7 | |||
} | |||
``` | |||
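
A hypothetical Go shape for these tags (the `KVPair` type here is
illustrative, not a final protobuf definition):

```go
// illustrative only: one key-value tag attached to a DeliverTx response
type KVPair struct {
	Key   string
	Value []byte
}

var tags = []KVPair{
	{Key: "abci.account.name", Value: []byte("Igor")},
	{Key: "abci.account.address", Value: []byte("0xdeadbeef")},
	{Key: "tx.gas", Value: []byte("7")},
}
```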
### Subscribing for transactions events | |||
If the user wants to receive only a subset of transactions, the ABCI app must
return a list of tags with the `DeliverTx` response. These tags will be parsed
and matched against the current queries (subscribers). If a query matches the
tags, the subscriber will get the transaction event.
``` | |||
/subscribe?query="tm.event = Tx AND tx.hash = AB0023433CF0334223212243BDD AND abci.account.invoice.number = 22" | |||
``` | |||
A new package must be developed to replace the current `events` package. It
will allow clients to subscribe to different types of events in the future:
``` | |||
/subscribe?query="abci.account.invoice.number = 22" | |||
/subscribe?query="abci.account.invoice.owner CONTAINS Igor" | |||
``` | |||
### Fetching transactions | |||
This is a bit tricky because a) we want to support a number of indexers, all of
which have different APIs, and b) we don't know whether tags will be sufficient
for most apps (I guess we'll see).
``` | |||
/txs/search?query="tx.hash = AB0023433CF0334223212243BDD AND abci.account.owner CONTAINS Igor" | |||
/txs/search?query="abci.account.owner = Igor" | |||
``` | |||
For historic queries, we will need an indexing storage (Postgres, SQLite, ...).
### Issues | |||
- https://github.com/tendermint/basecoin/issues/91 | |||
- https://github.com/tendermint/tendermint/issues/376 | |||
- https://github.com/tendermint/tendermint/issues/287 | |||
- https://github.com/tendermint/tendermint/issues/525 (related) | |||
## Status | |||
proposed | |||
## Consequences | |||
### Positive | |||
- same format for event notifications and search APIs | |||
- powerful enough query | |||
### Negative | |||
- performance of the `match` function (when there are too many queries / subscribers)
- there may be an issue when there are too many txs in the DB
### Neutral |