ci: add markdown linter (#146)

Marko authored 4 years ago, committed by GitHub
parent commit efbbc9462f
33 changed files with 949 additions and 902 deletions
1. .github/workflows/linter.yml (+28, -0)
2. .markdownlint.yml (+7, -0)
3. README.md (+12, -13)
4. rfc/001-block-retention.md (+20, -20)
5. rfc/002-nonzero-genesis.md (+10, -10)
6. spec/abci/abci.md (+255, -257)
7. spec/abci/apps.md (+24, -24)
8. spec/consensus/bft-time.md (+28, -29)
9. spec/consensus/consensus-paper/README.md (+3, -4)
10. spec/consensus/consensus.md (+2, -2)
11. spec/consensus/creating-proposal.md (+10, -10)
12. spec/consensus/light-client/README.md (+4, -5)
13. spec/consensus/light-client/accountability.md (+24, -47)
14. spec/consensus/light-client/verification.md (+21, -29)
15. spec/consensus/readme.md (+1, -1)
16. spec/consensus/signing.md (+30, -30)
17. spec/core/data_structures.md (+109, -109)
18. spec/core/encoding.md (+46, -46)
19. spec/core/state.md (+22, -22)
20. spec/p2p/config.md (+1, -2)
21. spec/p2p/connection.md (+4, -4)
22. spec/p2p/peer.md (+10, -10)
23. spec/reactors/block_sync/bcv1/impl-v1.md (+85, -64)
24. spec/reactors/block_sync/impl.md (+20, -21)
25. spec/reactors/block_sync/reactor.md (+16, -16)
26. spec/reactors/consensus/consensus-reactor.md (+48, -46)
27. spec/reactors/consensus/consensus.md (+7, -7)
28. spec/reactors/consensus/proposer-selection.md (+78, -50)
29. spec/reactors/mempool/config.md (+1, -1)
30. spec/reactors/mempool/functionality.md (+1, -1)
31. spec/reactors/mempool/reactor.md (+1, -1)
32. spec/reactors/pex/pex.md (+2, -2)
33. spec/reactors/state_sync/reactor.md (+19, -19)

.github/workflows/linter.yml (+28, -0)

@@ -0,0 +1,28 @@

```yaml
name: Lint
on:
  push:
    branches:
      - master
    paths:
      - "**.md"
  pull_request:
    branches: [master]
    paths:
      - "**.md"

jobs:
  build:
    name: Super linter
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2
      - name: Lint Code Base
        uses: docker://github/super-linter:v3
        env:
          LINTER_RULES_PATH: .
          VALIDATE_ALL_CODEBASE: true
          DEFAULT_BRANCH: master
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          VALIDATE_MD: true
          MARKDOWN_CONFIG_FILE: .markdownlint.yml
```

.markdownlint.yml (+7, -0)

@@ -0,0 +1,7 @@

```yaml
default: true
MD007: { indent: 4 }
MD013: false
MD024: { siblings_only: true }
MD025: false
MD033: false
MD036: false
```
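The two customized rules are worth unpacking: `MD007: { indent: 4 }` requires nested list items to be indented by four spaces, and `MD024: { siblings_only: true }` permits duplicate headings as long as they do not share the same parent. A small illustration (hypothetical markdown, not part of this commit):

```markdown
- parent item
    - nested item, indented four spaces (satisfies MD007)

## Install

### Notes

## Upgrade

### Notes
```

The repeated `### Notes` heading is allowed here because the two occurrences sit under different parent headings; with `siblings_only` unset, the second one would be flagged.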

README.md (+12, -13)

@@ -1,25 +1,24 @@

# Tendermint Spec

This repository contains specifications for the Tendermint protocol. For the PDF, see the [latest release](https://github.com/tendermint/spec/releases).

There are currently two implementations of the Tendermint protocol,
maintained by two separate-but-collaborative entities:
One in [Go](https://github.com/tendermint/tendermint),
maintained by Interchain GmbH,
and one in [Rust](https://github.com/informalsystems/tendermint-rs),
maintained by Informal Systems.

There have been inadvertent divergences in the specs followed
by the Go implementation and the Rust implementation respectively.
However, we are working to reconverge these specs into a single unified spec.
Consequently, this repository is in a bit of a state of flux.

At the moment, the spec followed by the Go implementation
(tendermint/tendermint) is in the [spec](spec) directory,
while the spec followed by the Rust implementation
(informalsystems/tendermint-rs) is in the rust-spec
directory. TLA+ specifications are also in the rust-spec directory.

Over time, these specs will converge in the spec directory.
Once they have fully converged, we will version the spec moving forward.

rfc/001-block-retention.md (+20, -20)

@@ -15,21 +15,21 @@

Currently, all Tendermint nodes contain the complete sequence of blocks from genesis up to some height (typically the latest chain height). This will no longer be true when the following features are released:

- [Block pruning](https://github.com/tendermint/tendermint/issues/3652): removes historical blocks and associated data (e.g. validator sets) up to some height, keeping only the most recent blocks.
- [State sync](https://github.com/tendermint/tendermint/issues/828): bootstraps a new node by syncing state machine snapshots at a given height, but not historical blocks and associated data.

To maintain the integrity of the chain, the use of these features must be coordinated such that necessary historical blocks will not become unavailable or lost forever. In particular:

- Some nodes should have complete block histories, for auditability, querying, and bootstrapping.
- The majority of nodes should retain blocks longer than the Cosmos SDK unbonding period, for light client verification.
- Some nodes must take and serve state sync snapshots with snapshot intervals less than the block retention periods, to allow new nodes to state sync and then replay blocks to catch up.
- Applications may not persist their state on commit, and require block replay on restart.
- Only a minority of nodes can be state synced within the unbonding period, for light client verification and to serve block histories for catch-up.

However, it is unclear if and how we should enforce this. It may not be possible to technically enforce all of these without knowing the state of the entire network, but it may also be unrealistic to expect this to be enforced entirely through social coordination. This is especially unfortunate since the consequences of misconfiguration can be permanent chain-wide data loss.

@@ -65,13 +65,13 @@ As an example, we'll consider how the Cosmos SDK might make use of this.

The returned `retain_height` would be the lowest height that satisfies all of the following; a sketch of the calculation follows the diagram below:

- Unbonding time: the time interval in which validators can be economically punished for misbehavior. Blocks in this interval must be auditable e.g. by the light client.
- IAVL snapshot interval: the block interval at which the underlying IAVL database is persisted to disk, e.g. every 10000 heights. Blocks since the last IAVL snapshot must be available for replay on application restart.
- State sync snapshots: blocks since the _oldest_ available snapshot must be available for state sync nodes to catch up (oldest because a node may be restoring an old snapshot while a new snapshot was taken).
- Local config: archive nodes may want to retain more or all blocks, e.g. via a local config option `min-retain-blocks`. There may also be a need to vary retention for other nodes, e.g. sentry nodes which do not need historical blocks.

![Cosmos SDK block retention diagram](images/block-retention.png)
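A hedged Go sketch of that calculation: the application keeps the lowest height any consumer still needs. Every input below is a hypothetical application-side value, not an existing SDK API.

```go
// retainHeight picks the lowest height still required by the unbonding
// period, IAVL replay, state sync snapshots, or local configuration.
// All parameters are hypothetical application-side values.
func retainHeight(commitHeight, unbondingBlocks, lastIAVLSnapshot, oldestStateSyncSnapshot, minRetainBlocks int64) int64 {
	candidates := []int64{
		commitHeight - unbondingBlocks, // keep the unbonding period auditable
		lastIAVLSnapshot,               // keep blocks for replay since the last persisted state
		oldestStateSyncSnapshot,        // keep blocks so state-synced nodes can catch up
		commitHeight - minRetainBlocks, // local operator override, e.g. archive nodes
	}
	retain := candidates[0]
	for _, h := range candidates[1:] {
		if h < retain {
			retain = h
		}
	}
	if retain < 1 {
		retain = 1 // never report a retain height below the initial block
	}
	return retain
}
```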
@@ -83,26 +83,26 @@ Accepted

### Positive

- Application-specified block retention allows the application to take all relevant factors into account and prevent necessary blocks from being accidentally removed.
- Node operators can independently decide whether they want to provide complete block histories (if local configuration for this is provided) and snapshots.

### Negative

- Social coordination is required to run archival nodes; failure to do so may lead to permanent loss of historical blocks.
- Social coordination is required to run snapshot nodes; failure to do so may lead to an inability to run state sync, and an inability to bootstrap new nodes at all if no archival nodes are online.

### Neutral

- Reduced block retention requires application changes, and cannot be controlled directly in Tendermint.
- Application-specified block retention may set a lower bound on disk space requirements for all nodes.

## References

- State sync ADR: <https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-053-state-sync-prototype.md>
- State sync issue: <https://github.com/tendermint/tendermint/issues/828>
- Block pruning issue: <https://github.com/tendermint/tendermint/issues/3652>

rfc/002-nonzero-genesis.md (+10, -10)

@@ -26,7 +26,7 @@ wallets, that assume a monotonically increasing height for a given blockchain. Users find

it confusing that a given height can now refer to distinct states depending on the chain
version.

An ideal solution would be to always retain block backwards compatibility in such a way that chain
history is never lost on upgrades. However, this may require a significant amount of engineering
work that is not viable for the planned Stargate release (Tendermint 0.34), and may prove too
restrictive for future development.

@@ -36,20 +36,20 @@ file would at least provide monotonically increasing heights. There was a proposal to include the

last block header of the previous chain as well, but since the genesis file is not verified and
hashed (only specific fields are) this would not be trustworthy.

External tooling will be required to map historical heights onto e.g. archive nodes that contain
blocks from previous chain versions. Tendermint will not include any such functionality.

## Proposal

Tendermint will allow chains to start from an arbitrary initial height:

- A new field `initial_height` is added to the genesis file, defaulting to `1`. It can be set to any
  non-negative integer, and `0` is considered equivalent to `1`.
- A new field `InitialHeight` is added to the ABCI `RequestInitChain` message, with the same value
  and semantics as the genesis field.
- A new field `InitialHeight` is added to the `state.State` struct, where `0` is considered invalid.
  Including the field here simplifies implementation, since the genesis value does not have to be
  propagated throughout the code base separately, but it is not strictly necessary.
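A hedged sketch of how an application might consume the new ABCI field in its `InitChain` handler, normalizing `0` to `1` per the rule above; `abci` is the Go ABCI types package, and the `app.initialHeight` field is illustrative:

```go
func (app *Application) InitChain(req abci.RequestInitChain) abci.ResponseInitChain {
	// Per this RFC, 0 is equivalent to 1, so normalize before storing.
	initialHeight := req.InitialHeight
	if initialHeight == 0 {
		initialHeight = 1
	}
	app.initialHeight = initialHeight // the first block will arrive at this height
	return abci.ResponseInitChain{}
}
```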
@@ -64,18 +64,18 @@ Accepted

### Positive

- Heights can be unique throughout the history of a "logical" chain, across hard fork upgrades.

### Negative

- Upgrades still cause loss of block history.
- Integrators will have to map height ranges to specific archive nodes/networks to query history.

### Neutral

- There is no explicit link to the last block of the previous chain.

## References

- [#2543: Allow genesis file to start from non-zero height w/ prev block header](https://github.com/tendermint/tendermint/issues/2543)

spec/abci/abci.md (+255, -257)

@@ -65,34 +65,34 @@ Example:

```go
abci.ResponseDeliverTx{
    // ...
    Events: []abci.Event{
        {
            Type: "validator.provisions",
            Attributes: kv.Pairs{
                kv.Pair{Key: []byte("address"), Value: []byte("...")},
                kv.Pair{Key: []byte("amount"), Value: []byte("...")},
                kv.Pair{Key: []byte("balance"), Value: []byte("...")},
            },
        },
        {
            Type: "validator.provisions",
            Attributes: kv.Pairs{
                kv.Pair{Key: []byte("address"), Value: []byte("...")},
                kv.Pair{Key: []byte("amount"), Value: []byte("...")},
                kv.Pair{Key: []byte("balance"), Value: []byte("...")},
            },
        },
        {
            Type: "validator.slashed",
            Attributes: kv.Pairs{
                kv.Pair{Key: []byte("address"), Value: []byte("...")},
                kv.Pair{Key: []byte("amount"), Value: []byte("...")},
                kv.Pair{Key: []byte("reason"), Value: []byte("...")},
            },
        },
        // ...
    },
}
```
@@ -120,19 +120,19 @@ non-determinism must be fixed and the nodes restarted.

Sources of non-determinism in applications may include:

- Hardware failures
    - Cosmic rays, overheating, etc.
- Node-dependent state
    - Random numbers
    - Time
- Underspecification
    - Library version changes
    - Race conditions
    - Floating point numbers
    - JSON serialization
    - Iterating through hash-tables/maps/dictionaries
- External Sources
    - Filesystem
    - Network calls (eg. some external REST API service)

See [#56](https://github.com/tendermint/abci/issues/56) for original discussion.
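The map-iteration entry deserves a concrete note: Go deliberately randomizes map iteration order, so any state that is hashed or serialized from a bare `range` over a map will differ between nodes. A hedged sketch of the usual fix, sorting keys first:

```go
import (
	"fmt"
	"sort"
)

// deterministicPairs folds a map into a reproducible slice by visiting
// keys in sorted order. A bare `for k, v := range balances` would visit
// entries in a different order on each run, breaking consensus.
func deterministicPairs(balances map[string]int64) []string {
	keys := make([]string, 0, len(balances))
	for k := range balances {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	pairs := make([]string, 0, len(keys))
	for _, k := range keys {
		pairs = append(pairs, fmt.Sprintf("%s=%d", k, balances[k]))
	}
	return pairs
}
```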
@@ -177,16 +177,16 @@ via light client.

### Echo

- **Request**:
    - `Message (string)`: A string to echo back
- **Response**:
    - `Message (string)`: The input string
- **Usage**:
    - Echo a string to test an abci client/server implementation

### Flush

- **Usage**:
    - Signals that messages queued on the client should be flushed to
      the server. It is called periodically by the client
      implementation to ensure asynchronous requests are actually
      sent, and is called immediately to make a synchronous request,
@@ -195,62 +195,62 @@ via light client.

### Info

- **Request**:
    - `Version (string)`: The Tendermint software semantic version
    - `BlockVersion (uint64)`: The Tendermint Block Protocol version
    - `P2PVersion (uint64)`: The Tendermint P2P Protocol version
- **Response**:
    - `Data (string)`: Some arbitrary information
    - `Version (string)`: The application software semantic version
    - `AppVersion (uint64)`: The application protocol version
    - `LastBlockHeight (int64)`: Latest block for which the app has
      called Commit
    - `LastBlockAppHash ([]byte)`: Latest result of Commit
- **Usage**:
    - Return information about the application state.
    - Used to sync Tendermint with the application during a handshake
      that happens on startup.
    - The returned `AppVersion` will be included in the Header of every block.
    - Tendermint expects `LastBlockAppHash` and `LastBlockHeight` to
      be updated during `Commit`, ensuring that `Commit` is never
      called twice for the same block height.
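A hedged sketch of an `Info` handler satisfying that handshake contract; `app.state` is a hypothetical store whose height and hash are persisted atomically during `Commit`:

```go
func (app *Application) Info(req abci.RequestInfo) abci.ResponseInfo {
	// Height and AppHash were persisted together in Commit, so the
	// handshake can replay or skip blocks without double-committing.
	return abci.ResponseInfo{
		Data:             "example app",
		Version:          "1.0.0",
		AppVersion:       1,
		LastBlockHeight:  app.state.Height,
		LastBlockAppHash: app.state.AppHash,
	}
}
```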
### SetOption

- **Request**:
    - `Key (string)`: Key to set
    - `Value (string)`: Value to set for key
- **Response**:
    - `Code (uint32)`: Response code
    - `Log (string)`: The output of the application's logger. May
      be non-deterministic.
    - `Info (string)`: Additional information. May
      be non-deterministic.
- **Usage**:
    - Set non-consensus critical application specific options.
    - e.g. Key="min-fee", Value="100fermion" could set the minimum fee
      required for CheckTx (but not DeliverTx - that would be
      consensus critical).
### InitChain

- **Request**:
    - `Time (google.protobuf.Timestamp)`: Genesis time.
    - `ChainID (string)`: ID of the blockchain.
    - `ConsensusParams (ConsensusParams)`: Initial consensus-critical parameters.
    - `Validators ([]ValidatorUpdate)`: Initial genesis validators, sorted by voting power.
    - `AppStateBytes ([]byte)`: Serialized initial application state. Amino-encoded JSON bytes.
    - `InitialHeight (int64)`: Height of the initial block (typically `1`).
- **Response**:
    - `ConsensusParams (ConsensusParams)`: Initial
      consensus-critical parameters (optional).
    - `Validators ([]ValidatorUpdate)`: Initial validator set (optional).
    - `AppHash ([]byte)`: Initial application hash.
- **Usage**:
    - Called once upon genesis.
    - If ResponseInitChain.Validators is empty, the initial validator set will be the RequestInitChain.Validators
    - If ResponseInitChain.Validators is not empty, it will be the initial
      validator set (regardless of what is in RequestInitChain.Validators).
    - This allows the app to decide if it wants to accept the initial validator
      set proposed by tendermint (ie. in the genesis file), or if it wants to use
      a different one (perhaps computed based on some application specific
      information in the genesis file).
@@ -258,154 +258,154 @@ via light client.

### Query

- **Request**:
    - `Data ([]byte)`: Raw query bytes. Can be used with or in lieu
      of Path.
    - `Path (string)`: Path of request, like an HTTP GET path. Can be
      used with or in lieu of Data.
        - Apps MUST interpret '/store' as a query by key on the
          underlying store. The key SHOULD be specified in the Data field.
        - Apps SHOULD allow queries over specific types like
          '/accounts/...' or '/votes/...'
    - `Height (int64)`: The block height for which you want the query
      (default=0 returns data for the latest committed block). Note
      that this is the height of the block containing the
      application's Merkle root hash, which represents the state as it
      was after committing the block at Height-1
    - `Prove (bool)`: Return Merkle proof with response if possible
- **Response**:
    - `Code (uint32)`: Response code.
    - `Log (string)`: The output of the application's logger. May
      be non-deterministic.
    - `Info (string)`: Additional information. May
      be non-deterministic.
    - `Index (int64)`: The index of the key in the tree.
    - `Key ([]byte)`: The key of the matching data.
    - `Value ([]byte)`: The value of the matching data.
    - `Proof (Proof)`: Serialized proof for the value data, if requested, to be
      verified against the `AppHash` for the given Height.
    - `Height (int64)`: The block height from which data was derived.
      Note that this is the height of the block containing the
      application's Merkle root hash, which represents the state as it
      was after committing the block at Height-1
    - `Codespace (string)`: Namespace for the `Code`.
- **Usage**:
    - Query for data from the application at current or past height.
    - Optionally return Merkle proof.
    - Merkle proof includes self-describing `type` field to support many types
      of Merkle trees and encoding formats.
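A hedged sketch of the mandated `/store` behaviour in a `Query` handler; `app.store.Get` and `app.state.Height` are hypothetical accessors:

```go
func (app *Application) Query(req abci.RequestQuery) abci.ResponseQuery {
	switch req.Path {
	case "/store":
		// MUST: treat the query as a raw key lookup in the underlying
		// store, with the key supplied in the Data field.
		value := app.store.Get(req.Data)
		return abci.ResponseQuery{
			Key:    req.Data,
			Value:  value,
			Height: app.state.Height, // block whose AppHash anchors this state
		}
	default:
		return abci.ResponseQuery{Code: 1, Log: "unsupported query path"}
	}
}
```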
### BeginBlock

- **Request**:
    - `Hash ([]byte)`: The block's hash. This can be derived from the
      block header.
    - `Header (struct{})`: The block header.
    - `LastCommitInfo (LastCommitInfo)`: Info about the last commit, including the
      round, and the list of validators and which ones signed the last block.
    - `ByzantineValidators ([]Evidence)`: List of evidence of
      validators that acted maliciously.
- **Response**:
    - `Tags ([]kv.Pair)`: Key-Value tags for filtering and indexing
- **Usage**:
    - Signals the beginning of a new block. Called prior to
      any DeliverTxs.
    - The header contains the height, timestamp, and more - it exactly matches the
      Tendermint block header. We may seek to generalize this in the future.
    - The `LastCommitInfo` and `ByzantineValidators` can be used to determine
      rewards and punishments for the validators. NOTE validators here do not
      include pubkeys.
### CheckTx

- **Request**:
    - `Tx ([]byte)`: The request transaction bytes
    - `Type (CheckTxType)`: What type of `CheckTx` request is this? At present,
      there are two possible values: `CheckTx_New` (the default, which says
      that a full check is required), and `CheckTx_Recheck` (when the mempool is
      initiating a normal recheck of a transaction).
- **Response**:
    - `Code (uint32)`: Response code
    - `Data ([]byte)`: Result bytes, if any.
    - `Log (string)`: The output of the application's logger. May
      be non-deterministic.
    - `Info (string)`: Additional information. May
      be non-deterministic.
    - `GasWanted (int64)`: Amount of gas requested for transaction.
    - `GasUsed (int64)`: Amount of gas consumed by transaction.
    - `Tags ([]kv.Pair)`: Key-Value tags for filtering and indexing
      transactions (eg. by account).
    - `Codespace (string)`: Namespace for the `Code`.
- **Usage**:
    - Technically optional - not involved in processing blocks.
    - Guardian of the mempool: every node runs CheckTx before letting a
      transaction into its local mempool.
    - The transaction may come from an external user or another node
    - CheckTx need not execute the transaction in full, but rather a light-weight
      yet stateful validation, like checking signatures and account balances, but
      not running code in a virtual machine.
    - Transactions where `ResponseCheckTx.Code != 0` will be rejected - they will not be broadcast to
      other nodes or included in a proposal block.
    - Tendermint attributes no other value to the response code
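A hedged sketch of the light-weight, stateful validation described above; `decodeTx` and the `app.accounts` balance lookup are hypothetical application helpers:

```go
func (app *Application) CheckTx(req abci.RequestCheckTx) abci.ResponseCheckTx {
	tx, err := decodeTx(req.Tx) // hypothetical decoder
	if err != nil {
		return abci.ResponseCheckTx{Code: 1, Log: "undecodable tx"}
	}
	// Check signatures and balances, but do not execute the
	// transaction in full; that is DeliverTx's job.
	if !tx.VerifySignature() {
		return abci.ResponseCheckTx{Code: 2, Log: "invalid signature"}
	}
	if app.accounts.Balance(tx.Sender) < tx.Amount {
		return abci.ResponseCheckTx{Code: 3, Log: "insufficient funds"}
	}
	return abci.ResponseCheckTx{Code: 0, GasWanted: tx.Gas}
}
```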
### DeliverTx

- **Request**:
    - `Tx ([]byte)`: The request transaction bytes.
- **Response**:
    - `Code (uint32)`: Response code.
    - `Data ([]byte)`: Result bytes, if any.
    - `Log (string)`: The output of the application's logger. May
      be non-deterministic.
    - `Info (string)`: Additional information. May
      be non-deterministic.
    - `GasWanted (int64)`: Amount of gas requested for transaction.
    - `GasUsed (int64)`: Amount of gas consumed by transaction.
    - `Tags ([]kv.Pair)`: Key-Value tags for filtering and indexing
      transactions (eg. by account).
    - `Codespace (string)`: Namespace for the `Code`.
- **Usage**:
    - The workhorse of the application - non-optional.
    - Execute the transaction in full.
    - `ResponseDeliverTx.Code == 0` only if the transaction is fully valid.
### EndBlock

- **Request**:
    - `Height (int64)`: Height of the block just executed.
- **Response**:
    - `ValidatorUpdates ([]ValidatorUpdate)`: Changes to validator set (set
      voting power to 0 to remove).
    - `ConsensusParamUpdates (ConsensusParams)`: Changes to
      consensus-critical time, size, and other parameters.
    - `Tags ([]kv.Pair)`: Key-Value tags for filtering and indexing
- **Usage**:
    - Signals the end of a block.
    - Called after all transactions, prior to each Commit.
    - Validator updates returned by block `H` impact blocks `H+1`, `H+2`, and
      `H+3`, but only effects changes on the validator set of `H+2`:
        - `H+1`: NextValidatorsHash
        - `H+2`: ValidatorsHash (and thus the validator set)
        - `H+3`: LastCommitInfo (ie. the last validator set)
    - Consensus params returned for block `H` apply for block `H+1`
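A hedged sketch of returning validator changes from `EndBlock`; `newValKey` and `oldValKey` are hypothetical `abci.PubKey` values. Returned at height `H`, these updates show up in `ValidatorsHash` at `H+2` per the schedule above:

```go
func (app *Application) EndBlock(req abci.RequestEndBlock) abci.ResponseEndBlock {
	return abci.ResponseEndBlock{
		ValidatorUpdates: []abci.ValidatorUpdate{
			{PubKey: newValKey, Power: 10}, // add a validator, or re-weight an existing one
			{PubKey: oldValKey, Power: 0},  // power 0 removes a validator
		},
	}
}
```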
### Commit

- **Response**:
    - `Data ([]byte)`: The Merkle root hash of the application state
    - `RetainHeight (int64)`: Blocks below this height may be removed. Defaults
      to `0` (retain all).
- **Usage**:
    - Persist the application state.
    - Return an (optional) Merkle root hash of the application state
    - `ResponseCommit.Data` is included as the `Header.AppHash` in the next block
        - it may be empty
    - Later calls to `Query` can return proofs about the application state anchored
      in this Merkle root hash
    - Note developers can return whatever they want here (could be nothing, or a
      constant string, etc.), so long as it is deterministic - it must not be a
      function of anything that did not come from the
      BeginBlock/DeliverTx/EndBlock methods.
    - Use `RetainHeight` with caution! If all nodes in the network remove historical
      blocks then this data is permanently lost, and no new nodes will be able to
      join the network and bootstrap. Historical blocks may also be required for
      other purposes, e.g. auditing, replay of non-persisted heights, light client
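A hedged sketch tying `Commit` to the retention rule above; `app.state.Persist` and `app.retainHeight` are hypothetical (the latter might implement the calculation from rfc/001-block-retention.md):

```go
func (app *Application) Commit() abci.ResponseCommit {
	appHash := app.state.Persist() // persist state, return its Merkle root
	return abci.ResponseCommit{
		Data:         appHash,
		RetainHeight: app.retainHeight(), // 0 means "retain all blocks"
	}
}
```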
@@ -414,256 +414,254 @@ via light client.

### ListSnapshots

- **Response**:
    - `Snapshots ([]Snapshot)`: List of local state snapshots.
- **Usage**:
    - Used during state sync to discover available snapshots on peers.
    - See `Snapshot` data type for details.
### LoadSnapshotChunk

- **Request**:
    - `Height (uint64)`: The height of the snapshot the chunk belongs to.
    - `Format (uint32)`: The application-specific format of the snapshot the chunk belongs to.
    - `Chunk (uint32)`: The chunk index, starting from `0` for the initial chunk.
- **Response**:
    - `Chunk ([]byte)`: The binary chunk contents, in an arbitrary format. Chunk messages cannot be
      larger than 16 MB _including metadata_, so 10 MB is a good starting point.
- **Usage**:
    - Used during state sync to retrieve snapshot chunks from peers.
### OfferSnapshot

- **Request**:
    - `Snapshot (Snapshot)`: The snapshot offered for restoration.
    - `AppHash ([]byte)`: The light client-verified app hash for this height, from the blockchain.
- **Response**:
    - `Result (Result)`: The result of the snapshot offer.
        - `ACCEPT`: Snapshot is accepted, start applying chunks.
        - `ABORT`: Abort snapshot restoration, and don't try any other snapshots.
        - `REJECT`: Reject this specific snapshot, try others.
        - `REJECT_FORMAT`: Reject all snapshots with this `format`, try others.
        - `REJECT_SENDERS`: Reject all snapshots from all senders of this snapshot, try others.
- **Usage**:
    - `OfferSnapshot` is called when bootstrapping a node using state sync. The application may
      accept or reject snapshots as appropriate. Upon accepting, Tendermint will retrieve and
      apply snapshot chunks via `ApplySnapshotChunk`. The application may also choose to reject a
      snapshot in the chunk response, in which case it should be prepared to accept further
      `OfferSnapshot` calls.
    - Only `AppHash` can be trusted, as it has been verified by the light client. Any other data
      can be spoofed by adversaries, so applications should employ additional verification schemes
      to avoid denial-of-service attacks. The verified `AppHash` is automatically checked against
      the restored application at the end of snapshot restoration.
    - For more information, see the `Snapshot` data type or the [state sync section](apps.md#state-sync).
### ApplySnapshotChunk

- **Request**:
    - `Index (uint32)`: The chunk index, starting from `0`. Tendermint applies chunks sequentially.
    - `Chunk ([]byte)`: The binary chunk contents, as returned by `LoadSnapshotChunk`.
    - `Sender (string)`: The P2P ID of the node who sent this chunk.
- **Response**:
    - `Result (Result)`: The result of applying this chunk.
        - `ACCEPT`: The chunk was accepted.
        - `ABORT`: Abort snapshot restoration, and don't try any other snapshots.
        - `RETRY`: Reapply this chunk, combine with `RefetchChunks` and `RejectSenders` as appropriate.
        - `RETRY_SNAPSHOT`: Restart this snapshot from `OfferSnapshot`, reusing chunks unless
          instructed otherwise.
        - `REJECT_SNAPSHOT`: Reject this snapshot, try a different one.
    - `RefetchChunks ([]uint32)`: Refetch and reapply the given chunks, regardless of `Result`. Only
      the listed chunks will be refetched, and reapplied in sequential order.
    - `RejectSenders ([]string)`: Reject the given P2P senders, regardless of `Result`. Any chunks
      already applied will not be refetched unless explicitly requested, but queued chunks from these
      senders will be discarded, and new chunks or other snapshots rejected.
- **Usage**:
    - The application can choose to refetch chunks and/or ban P2P peers as appropriate. Tendermint
      will not do this unless instructed by the application.
    - The application may want to verify each chunk, e.g. by attaching chunk hashes in
      `Snapshot.Metadata` and/or incrementally verifying contents against `AppHash`.
    - When all chunks have been accepted, Tendermint will make an ABCI `Info` call to verify that
      `LastBlockAppHash` and `LastBlockHeight` match the expected values, and record the
      `AppVersion` in the node state. It then switches to fast sync or consensus and joins the
      network.
    - If Tendermint is unable to retrieve the next chunk after some time (e.g. because no suitable
      peers are available), it will reject the snapshot and try a different one via `OfferSnapshot`.
      The application should be prepared to reset and accept it or abort as appropriate.
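A hedged sketch of per-chunk verification against hashes the application stored in `Snapshot.Metadata`; `app.restore` and its fields are hypothetical restoration state:

```go
import (
	"bytes"
	"crypto/sha256"
)

func (app *Application) ApplySnapshotChunk(req abci.RequestApplySnapshotChunk) abci.ResponseApplySnapshotChunk {
	// Compare the chunk against the hash recorded in Snapshot.Metadata.
	sum := sha256.Sum256(req.Chunk)
	if !bytes.Equal(sum[:], app.restore.ChunkHashes[req.Index]) {
		// Refetch this chunk and stop trusting its sender.
		return abci.ResponseApplySnapshotChunk{
			Result:        abci.ResponseApplySnapshotChunk_RETRY,
			RefetchChunks: []uint32{req.Index},
			RejectSenders: []string{req.Sender},
		}
	}
	app.restore.Apply(req.Chunk) // hypothetical restore step
	return abci.ResponseApplySnapshotChunk{Result: abci.ResponseApplySnapshotChunk_ACCEPT}
}
```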
## Data Types

### Header

- **Fields**:
    - `Version (Version)`: Version of the blockchain and the application
    - `ChainID (string)`: ID of the blockchain
    - `Height (int64)`: Height of the block in the chain
    - `Time (google.protobuf.Timestamp)`: Time of the previous block.
      For most blocks it's the weighted median of the timestamps of the valid votes in the
      block.LastCommit, except for the initial height where it's the genesis time.
    - `LastBlockID (BlockID)`: Hash of the previous (parent) block
    - `LastCommitHash ([]byte)`: Hash of the previous block's commit
    - `ValidatorsHash ([]byte)`: Hash of the validator set for this block
    - `NextValidatorsHash ([]byte)`: Hash of the validator set for the next block
    - `ConsensusHash ([]byte)`: Hash of the consensus parameters for this block
    - `AppHash ([]byte)`: Data returned by the last call to `Commit` - typically the
      Merkle root of the application state after executing the previous block's
      transactions
    - `LastResultsHash ([]byte)`: Root hash of all results from the txs from the previous block.
    - `EvidenceHash ([]byte)`: Hash of the evidence included in this block
    - `ProposerAddress ([]byte)`: Original proposer for the block
- **Usage**:
    - Provided in RequestBeginBlock
    - Provides important context about the current state of the blockchain -
      especially height and time.
    - Provides the proposer of the current block, for use in proposer-based
      reward mechanisms.
    - `LastResultsHash` is the root hash of a Merkle tree built from `ResponseDeliverTx` responses (`Log`, `Info`, `Codespace` and `Events` fields are ignored).

### Version

- **Fields**:
    - `Block (uint64)`: Protocol version of the blockchain data structures.
    - `App (uint64)`: Protocol version of the application.
- **Usage**:
    - Block version should be static in the life of a blockchain.
    - App version may be updated over time by the application.

### Validator

- **Fields**:
    - `Address ([]byte)`: Address of the validator (the first 20 bytes of SHA256(public key))
    - `Power (int64)`: Voting power of the validator
- **Usage**:
    - Validator identified by address
    - Used in RequestBeginBlock as part of VoteInfo
    - Does not include PubKey to avoid sending potentially large quantum pubkeys
      over the ABCI

### ValidatorUpdate

- **Fields**:
    - `PubKey (PubKey)`: Public key of the validator
    - `Power (int64)`: Voting power of the validator
- **Usage**:
    - Validator identified by PubKey
    - Used to tell Tendermint to update the validator set

### VoteInfo

- **Fields**:
    - `Validator (Validator)`: A validator
    - `SignedLastBlock (bool)`: Indicates whether or not the validator signed
      the last block
- **Usage**:
    - Indicates whether a validator signed the last block, allowing for rewards
      based on validator availability

### PubKey

- **Fields**:
    - `Type (string)`: Type of the public key. A simple string like `"ed25519"`.
      In the future, may indicate a serialization algorithm to parse the `Data`,
      for instance `"amino"`.
    - `Data ([]byte)`: Public key data. For a simple public key, it's just the
      raw bytes. If the `Type` indicates an encoding algorithm, this is the
      encoded public key.
- **Usage**:
    - A generic and extensible typed public key

### Evidence

- **Fields**:
    - `Type (string)`: Type of the evidence. A hierarchical path like
      "duplicate/vote".
    - `Validator (Validator)`: The offending validator
    - `Height (int64)`: Height when the offense occurred
    - `Time (google.protobuf.Timestamp)`: Time of the block that was committed at the height that the offense occurred
    - `TotalVotingPower (int64)`: Total voting power of the validator set at
      height `Height`

### LastCommitInfo

- **Fields**:
    - `Round (int32)`: Commit round.
    - `Votes ([]VoteInfo)`: List of validator addresses in the last validator set
      with their voting power and whether or not they signed a vote.

### ConsensusParams

- **Fields**:
    - `Block (BlockParams)`: Parameters limiting the size of a block and time between consecutive blocks.
    - `Evidence (EvidenceParams)`: Parameters limiting the validity of
      evidence of byzantine behaviour.
    - `Validator (ValidatorParams)`: Parameters limiting the types of pubkeys validators can use.
    - `Version (VersionParams)`: The ABCI application version.

### BlockParams

- **Fields**:
    - `MaxBytes (int64)`: Max size of a block, in bytes.
    - `MaxGas (int64)`: Max sum of `GasWanted` in a proposed block.
        - NOTE: blocks that violate this may be committed if there are Byzantine proposers.
          It's the application's responsibility to handle this when processing a
          block!

### EvidenceParams

- **Fields**:
    - `MaxAgeNumBlocks (int64)`: Max age of evidence, in blocks.
    - `MaxAgeDuration (time.Duration)`: Max age of evidence, in time.
      It should correspond with an app's "unbonding period" or other similar
      mechanism for handling [Nothing-At-Stake
      attacks](https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ#what-is-the-nothing-at-stake-problem-and-how-can-it-be-fixed).
        - Evidence older than `MaxAgeNumBlocks` && `MaxAgeDuration` is considered
          stale and ignored.
        - In Cosmos-SDK based blockchains, `MaxAgeDuration` is usually equal to the
          unbonding period. `MaxAgeNumBlocks` is calculated by dividing the unbonding
          period by the average block time (e.g. 2 weeks / 6s per block = 201600 blocks).
    - `MaxNum (uint32)`: The maximum number of evidence that can be committed to a single block
    - `ProofTrialPeriod (int64)`: The duration in terms of blocks that an indicted node has to
      provide proof of correctly executing a lock change in the event of amnesia evidence.

### ValidatorParams

- **Fields**:
    - `PubKeyTypes ([]string)`: List of accepted pubkey types. Uses same
      naming as `PubKey.Type`.

### VersionParams

- **Fields**:
    - `AppVersion (uint64)`: The ABCI application version.

### Proof

- **Fields**:
    - `Ops ([]ProofOp)`: List of chained Merkle proofs, of possibly different types
        - The Merkle root of one op is the value being proven in the next op.
        - The Merkle root of the final op should equal the ultimate root hash being
          verified against.
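A hedged sketch of walking such a chain of ops; `verifyOp` stands in for whatever per-type verifier (e.g. IAVL or simple Merkle) an implementation registers, and returns the Merkle root computed for one op:

```go
import (
	"bytes"
	"errors"
)

// verifyChained checks a chained proof: each op proves the current value
// under its own sub-root, and that sub-root becomes the value proven by
// the next op. The final root must match the expected root hash.
func verifyChained(ops []ProofOp, value, rootHash []byte) error {
	current := value
	for _, op := range ops {
		root, err := verifyOp(op, current) // hypothetical per-type verifier
		if err != nil {
			return err
		}
		current = root
	}
	if !bytes.Equal(current, rootHash) {
		return errors.New("proof chain does not match the expected root hash")
	}
	return nil
}
```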
### ProofOp

- **Fields**:
    - `Type (string)`: Type of Merkle proof and how it's encoded.
    - `Key ([]byte)`: Key in the Merkle tree that this proof is for.
    - `Data ([]byte)`: Encoded Merkle proof for the key.

### Snapshot

- **Fields**:
    - `Height (uint64)`: The height at which the snapshot was taken (after commit).
    - `Format (uint32)`: An application-specific snapshot format, allowing applications to version
      their snapshot data format and make backwards-incompatible changes. Tendermint does not
      interpret this.
    - `Chunks (uint32)`: The number of chunks in the snapshot. Must be at least 1 (even if empty).
    - `Hash (bytes)`: An arbitrary snapshot hash. Must be equal only for identical snapshots across
      nodes. Tendermint does not interpret the hash, it only compares them.
    - `Metadata (bytes)`: Arbitrary application metadata, for example chunk hashes or other
      verification data.
- **Usage**:
    - Used for state sync snapshots, see [separate section](apps.md#state-sync) for details.
    - A snapshot is considered identical across nodes only if _all_ fields are equal (including
      `Metadata`). Chunks may be retrieved from all nodes that have the same snapshot.
    - When sent across the network, a snapshot message can be at most 4 MB.

spec/abci/apps.md (+24, -24)

@@ -203,7 +203,7 @@ blockchain.

Updates to the Tendermint validator set can be made by returning
`ValidatorUpdate` objects in the `ResponseEndBlock`:

```proto
message ValidatorUpdate {
    PubKey pub_key
    int64 power
}
```
@@ -226,9 +226,9 @@ following rules:

- if power is 0, the validator must already exist, and will be removed from the
  validator set
- if power is non-0:
    - if the validator does not already exist, it will be added to the validator
      set with the given power
    - if the validator does already exist, its power will be adjusted to the given power
- the total power of the new validator set must not exceed MaxTotalVotingPower

Note the updates returned in block `H` will only take effect at block `H+2`.
@@ -293,14 +293,14 @@ Must have `MaxAgeNumBlocks > 0`.

This is the maximum number of evidence that can be committed to a single block.

The product of this and the `MaxEvidenceBytes` must not exceed the size of
a block minus its overhead (~ `MaxBytes`).

The amount must be a positive number.

### EvidenceParams.ProofTrialPeriod

This is the duration in terms of blocks that an indicted validator has to prove a
correct lock change in the event of amnesia evidence when a validator voted more
than once across different rounds.
@@ -381,7 +381,7 @@ Some applications (eg. Ethereum, Cosmos-SDK) have multiple "levels" of Merkle trees,

where the leaves of one tree are the root hashes of others. To support this, and
the general variability in Merkle proofs, the `ResponseQuery.Proof` has some minimal structure:

```proto
message Proof {
    repeated ProofOp ops
}
```
@@ -437,7 +437,7 @@ failed during the Commit of block H, then `last_block_height = H-1`

We now distinguish three heights, and describe how Tendermint syncs itself with
the app.

```md
storeBlockHeight = height of the last block Tendermint saw a commit for
stateBlockHeight = height of the last block for which Tendermint completed all
    block processing and saved all ABCI results to disk
```
@@ -497,8 +497,8 @@ State sync is an alternative mechanism for bootstrapping a new node, where it fetches a snapshot

of the state machine at a given height and restores it. Depending on the application, this can
be several orders of magnitude faster than replaying blocks.

Note that state sync does not currently backfill historical blocks, so the node will have a
truncated block history - users are advised to consider the broader network implications of this in
terms of block availability and auditability. This functionality may be added in the future.

For details on the specific ABCI calls and types, see the [methods and types section](abci.md).
@ -509,20 +509,20 @@ Applications that want to support state syncing must take state snapshots at reg
this is accomplished is entirely up to the application. A snapshot consists of some metadata and
a set of binary chunks in an arbitrary format:
* `Height (uint64)`: The height at which the snapshot is taken. It must be taken after the given
- `Height (uint64)`: The height at which the snapshot is taken. It must be taken after the given
height has been committed, and must not contain data from any later heights.
* `Format (uint32)`: An arbitrary snapshot format identifier. This can be used to version snapshot
formats, e.g. to switch from Protobuf to MessagePack for serialization. The application can use
- `Format (uint32)`: An arbitrary snapshot format identifier. This can be used to version snapshot
formats, e.g. to switch from Protobuf to MessagePack for serialization. The application can use
this when restoring to choose whether to accept or reject a snapshot.
- `Chunks (uint32)`: The number of chunks in the snapshot. Each chunk contains arbitrary binary
data, and should be less than 16 MB; 10 MB is a good starting point.
- `Hash ([]byte)`: An arbitrary hash of the snapshot. This is used to check whether a snapshot is
the same across nodes when downloading chunks.
- `Metadata ([]byte)`: Arbitrary snapshot metadata, e.g. chunk hashes for verification or any other
necessary info.
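Put together, the fields above suggest a snapshot type along these lines (a sketch for orientation; the authoritative definition is the ABCI protobuf message):

```go
// Snapshot mirrors the field list above; illustrative only.
type Snapshot struct {
    Height   uint64 // height at which the snapshot was taken
    Format   uint32 // application-specific format identifier
    Chunks   uint32 // number of chunks in the snapshot
    Hash     []byte // arbitrary snapshot hash, compared across nodes
    Metadata []byte // arbitrary metadata, e.g. chunk hashes for verification
}
```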
For a snapshot to be considered the same across nodes, all of these fields must be identical. When
@ -533,15 +533,15 @@ application via the ABCI `ListSnapshots` method to discover available snapshots,
snapshot chunks via `LoadSnapshotChunk`. The application is free to choose how to implement this
and which formats to use, but should provide the following guarantees:
- **Consistent:** A snapshot should be taken at a single isolated height, unaffected by
concurrent writes. This can e.g. be accomplished by using a data store that supports ACID
transactions with snapshot isolation.
- **Asynchronous:** Taking a snapshot can be time-consuming, so it should not halt chain progress,
for example by running in a separate thread.
- **Deterministic:** A snapshot taken at the same height in the same format should be identical
(at the byte level) across nodes, including all metadata. This ensures good availability of
chunks, and that they fit together across nodes.
A very basic approach might be to use a datastore with MVCC transactions (such as RocksDB),
@ -583,7 +583,7 @@ the application aborts.
#### Snapshot Restoration
Once a snapshot has been accepted via `OfferSnapshot`, Tendermint begins downloading chunks from
any peers that have the same snapshot (i.e. that have identical metadata fields). Chunks are
spooled in a temporary directory, and then given to the application in sequential order via
`ApplySnapshotChunk` until all chunks have been accepted.
@ -603,7 +603,7 @@ restarting restoration, or simply abort with an error.
#### Snapshot Verification
Once all chunks have been accepted, Tendermint issues an `Info` ABCI call to retrieve the
`LastBlockAppHash`. This is compared with the trusted app hash from the chain, retrieved and
verified using the light client. Tendermint also checks that `LastBlockHeight` corresponds to the
height of the snapshot.
@ -623,8 +623,8 @@ P2P configuration options to whitelist a set of trusted peers that can provide v
#### Transition to Consensus
Once the snapshot has been restored, Tendermint gathers additional information necessary for
bootstrapping the node (e.g. chain ID, consensus parameters, validator sets, and block headers)
from the genesis file and light client RPC servers. It also fetches and records the `AppVersion`
from the ABCI application.
Once the node is bootstrapped with this information and the restored state machine, it transitions


+ 28
- 29
spec/consensus/bft-time.md View File

@ -1,54 +1,53 @@
# BFT Time
Tendermint provides a deterministic, Byzantine fault-tolerant source of time.
Time in Tendermint is defined by the Time field of the block header.
It satisfies the following properties:
- Time Monotonicity: Time is monotonically increasing, i.e., given
a header H1 for height h1 and a header H2 for height `h2 = h1 + 1`, `H1.Time < H2.Time`.
- Time Validity: Given a set of Commit votes that forms the `block.LastCommit` field, a range of
valid values for the Time field of the block header is defined only by
Precommit messages (from the LastCommit field) sent by correct processes, i.e.,
a faulty process cannot arbitrarily increase the Time value.
In the context of Tendermint, time is of type int64 and denotes UNIX time in milliseconds, i.e.,
corresponds to the number of milliseconds since January 1, 1970. Before defining rules that need to be enforced by the
Tendermint consensus protocol, so that the properties above hold, we introduce the following definition:
- median of a Commit is equal to the median of `Vote.Time` fields of the `Vote` messages,
where the value of `Vote.Time` is counted a number of times proportional to the voting power of the process. As in Tendermint
the voting power is not uniform (one process one vote), a vote message is actually an aggregator of the same votes whose
number is equal to the voting power of the process that has cast the corresponding vote message.
Let's consider the following example:
- we have four processes p1, p2, p3 and p4, with the following voting power distribution (p1, 23), (p2, 27), (p3, 10)
and (p4, 10). The total voting power is 70 (`N = 3f+1`, where `N` is the total voting power, and `f` is the maximum voting
power of the faulty processes), so we assume that the faulty processes have at most 23 of the voting power.
Furthermore, we have the following vote messages in some LastCommit field (we ignore all fields except the Time field):
- (p1, 100), (p2, 98), (p3, 1000), (p4, 500). We assume that p3 and p4 are faulty processes. Let's assume that the
`block.LastCommit` message contains votes of processes p2, p3 and p4. The median is then chosen the following way:
the value 98 is counted 27 times, the value 1000 is counted 10 times and the value 500 is also counted 10 times.
So the median value will be 98. No matter what set of messages with at least `2f+1` voting power we
choose, the median value will always be between the values sent by correct processes.
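The weighted median from the example can be computed with a short sketch like the one below (illustrative only; the types and function name are not from the codebase, and `import "sort"` is assumed):

```go
// weightedMedian returns the median of times, where times[i] is counted with
// multiplicity powers[i] (the voting power of the process that sent it).
func weightedMedian(times []int64, powers []int64) int64 {
    type weighted struct {
        time  int64
        power int64
    }
    votes := make([]weighted, len(times))
    var totalPower int64
    for i := range times {
        votes[i] = weighted{times[i], powers[i]}
        totalPower += powers[i]
    }
    sort.Slice(votes, func(i, j int) bool { return votes[i].time < votes[j].time })

    // walk up the sorted timestamps until half of the total power is covered
    var accumulated int64
    for _, v := range votes {
        accumulated += v.power
        if accumulated*2 > totalPower {
            return v.time
        }
    }
    return votes[len(votes)-1].time
}
```

For the `LastCommit` in the example, `weightedMedian([]int64{98, 1000, 500}, []int64{27, 10, 10})` returns 98, as described above.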
We ensure Time Monotonicity and Time Validity properties by the following rules:
- let rs denote `RoundState` (consensus internal state) of some process. Then
`rs.ProposalBlock.Header.Time == median(rs.LastCommit) &&
rs.Proposal.Timestamp == rs.ProposalBlock.Header.Time`.
- Furthermore, when creating the `vote` message, the following rules for determining the `vote.Time` field should hold:
- if `rs.LockedBlock` is defined then
`vote.Time = max(rs.LockedBlock.Timestamp + config.BlockTimeIota, time.Now())`, where `time.Now()`
denotes local Unix time in milliseconds, and `config.BlockTimeIota` is a configuration parameter that corresponds
to the minimum timestamp increment of the next block.
- else if `rs.Proposal` is defined then
`vote.Time = max(rs.Proposal.Timestamp + config.BlockTimeIota, time.Now())`,
- otherwise, `vote.Time = time.Now()`. In this case the vote is for `nil`, so it is not taken into account for
the timestamp of the next block.
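A compact reading of these rules (a sketch; the simplified signature and `max64` helper are invented here, and `*int64` stands in for "defined or not"):

```go
// voteTime sketches the rules above. lockedTime/proposalTime are nil when
// rs.LockedBlock/rs.Proposal are not defined; now is the local Unix time in
// milliseconds and iota is config.BlockTimeIota.
func voteTime(lockedTime, proposalTime *int64, now, iota int64) int64 {
    switch {
    case lockedTime != nil:
        return max64(*lockedTime+iota, now)
    case proposalTime != nil:
        return max64(*proposalTime+iota, now)
    default:
        // the vote is for nil, so it is not used for the next block's timestamp
        return now
    }
}

func max64(a, b int64) int64 {
    if a > b {
        return a
    }
    return b
}
```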

+ 3
- 4
spec/consensus/consensus-paper/README.md View File

@ -7,12 +7,12 @@ consensus protocol.
MacTeX is a LaTeX distribution for macOS. You can download it [here](http://www.tug.org/mactex/mactex-download.html).
A popular IDE for LaTeX-based projects is TeXstudio. It can be downloaded
[here](https://www.texstudio.org/).
## How to build the project
In order to compile the LaTeX files (and build the bibliography), execute
`$ pdflatex paper` <br/>
`$ bibtex paper` <br/>
@ -22,4 +22,3 @@ In order to compile the latex files (and write bibliography), execute
The generated file is paper.pdf. You can open it with
`$ open paper.pdf`

+ 2
- 2
spec/consensus/consensus.md View File

@ -32,7 +32,7 @@ determine the next block. Each round is composed of three _steps_
In the optimal scenario, the order of steps is:
```md
NewHeight -> (Propose -> Prevote -> Precommit)+ -> Commit -> NewHeight ->...
```
@ -59,7 +59,7 @@ parameters over each successive round.
## State Machine Diagram
```md
+-------------------------------------+
v |(Wait til `CommitTime+timeoutCommit`)
+-----------+ +-----+-----+


+ 10
- 10
spec/consensus/creating-proposal.md View File

@ -16,11 +16,11 @@ we account for amino overhead for each transaction.
```go
func MaxDataBytes(maxBytes int64, valsCount, evidenceCount int) int64 {
    return maxBytes -
        MaxAminoOverheadForBlock -
        MaxHeaderBytes -
        int64(valsCount)*MaxVoteBytes -
        int64(evidenceCount)*MaxEvidenceBytes
}
```
@ -33,10 +33,10 @@ maximum evidence size (1/10th of the maximum block size).
```go
func MaxDataBytesUnknownEvidence(maxBytes int64, valsCount int) int64 {
    return maxBytes -
        MaxAminoOverheadForBlock -
        MaxHeaderBytes -
        int64(valsCount)*MaxVoteBytes -
        MaxEvidenceBytesPerBlock(maxBytes)
}
```
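For intuition, a hedged usage sketch (the constant values below are placeholders for the real constants defined in the Tendermint codebase, and are not authoritative):

```go
// Illustrative placeholder values only; the real constants live in the
// Tendermint types package and may differ.
const (
    MaxAminoOverheadForBlock = 11
    MaxHeaderBytes           = 653
    MaxVoteBytes             = 223
    MaxEvidenceBytes         = 484
)

// With a 1 MB block, 10 validators and 2 pieces of evidence, the bytes left
// for transactions would be:
// MaxDataBytes(1048576, 10, 2) = 1048576 - 11 - 653 - 10*223 - 2*484 = 1044714
```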

+ 4
- 5
spec/consensus/light-client/README.md View File

@ -1,7 +1,7 @@
# Tendermint Light Client Protocol
NOTE: This specification is under heavy development and is not yet complete nor
accurate.
## Contents
@ -51,16 +51,15 @@ full nodes.
### Synchrony
Light clients are fundamentally synchronous protocols,
where security is restricted by the interval during which a validator can be punished
for Byzantine behaviour. We assume here that such intervals have a fixed and known minimal duration
commonly referred to as a blockchain's Unbonding Period.
A secure light client must guarantee that all three components -
core verification, fork detection, and fork accountability -
each with their own synchrony assumptions and fault model, can execute
sequentially and to completion within the given Unbonding Period.
TODO: define all the synchrony parameters used in the protocol and their
relation to the Unbonding Period.

+ 24
- 47
spec/consensus/light-client/accountability.md View File

@ -1,8 +1,9 @@
# Fork accountability
## Problem Statement
Tendermint consensus guarantees the following specifications for all heights:
* agreement -- no two correct full nodes decide differently.
* validity -- the decided block satisfies the predefined predicate *valid()*.
* termination -- all correct full nodes eventually decide,
@ -13,7 +14,6 @@ does not hold, each of the specification may be violated.
The agreement property says that for a given height, any two correct validators that decide on a block for that height decide on the same block. That the block was indeed generated by the blockchain can be verified by starting from a trusted (genesis) block and checking that all subsequent blocks are properly signed.
However, faulty nodes may forge blocks and try to convince users (light clients) that the blocks had been correctly generated. In addition, Tendermint agreement might be violated in the case where more than 1/3 of the voting power belongs to faulty validators: Two correct validators decide on different blocks. The latter case motivates the term "fork": as Tendermint consensus also agrees on the next validator set, correct validators may have decided on disjoint next validator sets, and the chain branches into two or more partitions (possibly having faulty validators in common) and each branch continues to generate blocks independently of the other.
We say that a fork is a case in which there are two commits for different blocks at the same height of the blockchain. The problem is to ensure that in those cases we are able to detect faulty validators (and not mistakenly accuse correct validators), and therefore incentivize validators to behave according to the protocol specification.
@ -24,7 +24,6 @@ We say that a fork is a case in which there are two commits for different blocks
*Remark.* In the case where more than 1/3 of the voting power belongs to faulty validators, validity and termination can also be broken. Termination can be broken if faulty processes simply do not send the messages that are needed to make progress. Due to asynchrony, this is not punishable, because faulty validators can always claim they never received the messages that would have forced them to send messages.
## The Misbehavior of Faulty Validators
Forks are the result of faulty validators deviating from the protocol. In principle several such deviations can be detected without a fork actually occurring:
@ -37,13 +36,11 @@ Forks are the result of faulty validators deviating from the protocol. In princi
*Remark.* In isolation, Point 3 is an attack on validity (rather than agreement). However, the prevotes and precommits can then also be used to forge blocks.
1. amnesia: Tendermint consensus has a locking mechanism. If a validator has some value v locked, then it can only prevote/precommit for v or nil. Sending a prevote/precommit message for a different value v' (that is not nil) while holding a lock on value v is misbehavior.
2. spurious messages: In Tendermint consensus most of the message send instructions are guarded by threshold guards, e.g., one needs to receive *2f + 1* prevote messages to send precommit. Faulty validators may send precommit without having received the prevote messages.
Independently of a fork happening, punishing this behavior might be important to prevent forks altogether. This should keep attackers from misbehaving: if at most 1/3 of the voting power is faulty, this misbehavior is detectable but will not lead to a safety violation. Thus, unless they have more than 1/3 (or in some cases more than 2/3) of the voting power, attackers have an incentive not to misbehave. If attackers control too much voting power, we have to deal with forks, as discussed in this document.
## Two types of forks
@ -53,7 +50,6 @@ As in this case we have two different blocks (both having the same right/no righ
* Fork-Light. All correct validators decide on the same block for height *h*, but faulty processes (validators or not), forge a different block for that height, in order to fool users (who use the light client).
# Attack scenarios
## On-chain attacks
@ -64,25 +60,17 @@ There are several scenarios in which forks might happen. The first is double sig
* F1. Equivocation: faulty validators sign multiple vote messages (prevote and/or precommit) for different values *during the same round r* at a given height h.
### Flip-flopping
Tendermint consensus implements a locking mechanism: If a correct validator *p* receives proposal for value v and *2f + 1* prevotes for a value *id(v)* in round *r*, it locks *v* and remembers *r*. In this case, *p* also sends a precommit message for *id(v)*, which later may serve as proof that *p* locked *v*.
In subsequent rounds, *p* only sends prevote messages for a value it had previously locked. However, it is possible to change the locked value if, in a future round *r' > r*, the process receives a proposal and *2f + 1* prevotes for a different value *v'*. In this case, *p* could send a prevote/precommit for *id(v')*. This algorithmic feature can be exploited in two ways:
* F2. Faulty Flip-flopping (Amnesia): faulty validators precommit some value *id(v)* in round *r* (value *v* is locked in round *r*) and then prevote for different value *id(v')* in higher round *r' > r* without previously correctly unlocking value *v*. In this case faulty processes "forget" that they have locked value *v* and prevote some other value in the following rounds.
Some correct validators might have decided on *v* in *r*, and other correct validators decide on *v'* in *r'*. Here we can have branching on the main chain (Fork-Full).
* F3. Correct Flip-flopping (Back to the past): There are some precommit messages signed by (correct) validators for value *id(v)* in round *r*. Still, *v* is not decided upon, and all processes move on to the next round. Then correct validators (correctly) lock and decide a different value *v'* in some round *r' > r*. And the correct validators continue; there is no branching on the main chain.
However, faulty validators may use the correct precommit messages from round *r* together with a posteriori generated faulty precommit messages for round *r* to forge a block for a value that was not decided on the main chain (Fork-Light).
## Off-chain attacks
F1-F3 may contaminate the state of full nodes (and even validators). Contaminated (but otherwise correct) full nodes may thus communicate faulty blocks to light clients.
@ -96,10 +84,9 @@ Similarly, without actually interfering with the main chain, we can have the fol
We consider three types of potential attack victims:
* FN: full node
* LCS: light client with sequential header verification
* LCB: light client with bisection based header verification
F1 and F2 can be used by faulty validators to actually create multiple branches on the blockchain. That means that correctly operating full nodes decide on different blocks for the same height. Until a fork is detected locally by a full node (by receiving evidence from others or by some other local check that fails), the full node can spread corrupted blocks to light clients.
@ -110,15 +97,9 @@ F3 is similar to F1, except that no two correct validators decide on different b
In addition, without creating a fork on the main chain, light clients can be contaminated by more than a third of validators that are faulty and sign a forged header
F4 cannot fool correct full nodes as they know the current validator set. Similarly, LCS know who the validators are. Hence, F4 is an attack against LCB that do not necessarily know the complete prefix of headers (Fork-Light), as they trust a header that is signed by at least one correct validator (trusting period method).
The following table gives an overview of how the different attacks may affect different nodes. F1-F3 are *on-chain* attacks so they can corrupt the state of full nodes. Then if a light client (LCS or LCB) contacts a full node to obtain headers (or blocks), the corrupted state may propagate to the light client.
F4 and F5 are *off-chain*, that is, these attacks cannot be used to corrupt the state of full nodes (which have sufficient knowledge on the state of the chain to not be fooled).
| Attack | FN | LCS | LCB |
|:------:|:------:|:------:|:------:|
@ -128,16 +109,11 @@ F4 and F5 are *off-chain*, that is, these attacks cannot be used to corrupt the
| F4 | | | direct |
| F5 | | | direct |
**Q:** Light clients are more vulnerable than full nodes, because the former only verify headers but do not execute transactions. What kind of certainty is gained by a full node that executes a transaction?
As a full node verifies all transactions, it can only be
contaminated by an attack if the blockchain itself violates its invariant (one block per height), that is, in case of a fork that leads to branching.
## Detailed Attack Scenarios
### Equivocation based attacks
@ -148,6 +124,7 @@ round of some height. This attack can be executed on both full nodes and light c
#### Scenario 1: Equivocation on the main chain
Validators:
* CA - a set of correct validators with less than 1/3 of the voting power
* CB - a set of correct validators with less than 1/3 of the voting power
* CA and CB are disjoint
@ -162,14 +139,15 @@ Execution:
* Validators from the set CA and CB prevote for A and B, respectively.
* Faulty validators from the set F prevote both for A and B.
* The faulty prevote messages
    * for A arrive at CA long before the B messages
    * for B arrive at CB long before the A messages
* Therefore correct validators from set CA and CB will observe
more than 2/3 of prevotes for A and B and precommit for A and B, respectively.
* Faulty validators from the set F precommit both values A and B.
* Thus, we have more than 2/3 commits for both A and B.
Consequences:
* Creating evidence of misbehavior is simple in this case as we have multiple messages signed by the same faulty processes for different values in the same round.
* We have to ensure that these different messages reach a correct process (full node, monitor?), which can submit evidence.
@ -180,11 +158,12 @@ Consequences:
#### Scenario 2: Equivocation to a light client (LCS)
Validators:
* a set F of faulty validators with more than 2/3 of the voting power.
Execution:
* for the main chain F behaves nicely
* F coordinates to sign a block B that is different from the one on the main chain.
* the light client obtains B and trusts it as it is signed by more than 2/3 of the voting power.
@ -202,8 +181,6 @@ In order to detect such (equivocation-based attack), the light client would need
### Flip-flopping: Amnesia based attacks
In case of amnesia, faulty validators lock some value *v* in some round *r*, and then vote for a different value *v'* in higher rounds without correctly unlocking value *v*. This attack can be used both on full nodes and light clients.
#### Scenario 3: At most 2/3 of faults
@ -215,7 +192,7 @@ Validators:
Execution:
* Faulty validators commit (without exposing it on the main chain) a block A in round *r* by collecting more than 2/3 of the
* Faulty validators commit (without exposing it on the main chain) a block A in round *r* by collecting more than 2/3 of the
* All validators (correct and faulty) reach a round *r' > r*.
* Some correct validators in C do not lock any value before round *r'*.
@ -224,7 +201,7 @@ Execution:
*Remark.* In this case, the more than 1/3 of faulty validators do not need to commit an equivocation (F1) as they only vote once per round in the execution.
Detecting faulty validators in the case of such an attack can be done by the fork accountability mechanism described in: <https://docs.google.com/document/d/11ZhMsCj3y7zIZz4udO9l25xqb0kl7gmWqNpGVRzOeyY/edit?usp=sharing>.
If a light client is attacked using this attack with more than 1/3 of voting power (and less than 2/3), the attacker cannot change the application state arbitrarily. Rather, the attacker is limited to a state a correct validator finds acceptable: In the execution above, correct validators still find the value acceptable, however, the block the light client trusts deviates from the one on the main chain.
@ -249,7 +226,7 @@ Consequences:
* The validators in F1 will be detectable by the fork accountability mechanisms.
* The validators in F2 cannot be detected using this mechanism.
Only if they signed something that conflicts with the application can this be used against them. Otherwise they have not done anything incorrect.
* This case is not covered by the report <https://docs.google.com/document/d/11ZhMsCj3y7zIZz4udO9l25xqb0kl7gmWqNpGVRzOeyY/edit?usp=sharing> as it only assumes at most 2/3 of faulty validators.
**Q:** do we need to define a special kind of attack for the case where a validator signs arbitrary state? It seems that detecting such an attack requires a different mechanism that would require as evidence a sequence of blocks that led to that state. This might be very tricky to implement.
@ -257,9 +234,10 @@ Only in case they signed something which conflicts with the application this can
In this kind of attack, faulty validators take advantage of the fact that they did not sign messages in some of the past rounds. Due to the asynchronous network in which Tendermint operates, we cannot easily differentiate between such an attack and a delayed message. This kind of attack can be used against both full nodes and light clients.
#### Scenario 5
Validators:
* C1 - a set of correct validators with 1/3 of the voting power
* C2 - a set of correct validators with 1/3 of the voting power
* C1 and C2 are disjoint
@ -267,7 +245,6 @@ Validators:
* one additional faulty process *q*
* F and *q* violate the Tendermint failure model.
Execution:
* in a round *r* of height *h* we have C1 precommitting a value A,
@ -278,7 +255,6 @@ Execution:
* F and *fp* "go back to the past" and sign precommit message for value A in round *r*.
* Together with precomit messages of C1 this is sufficient for a commit for value A.
Consequences:
* Only a single faulty validator that previously precommitted nil committed equivocation, while the other 1/3 of faulty validators actually executed an attack that has exactly the same sequence of messages as part of the amnesia attack. Detecting this kind of attack boils down to the mechanisms for equivocation and amnesia.
@ -289,16 +265,17 @@ Consequences:
In case of phantom validators, processes that are not part of the current validator set but are still bonded (as the attack happens during their unbonding period) can be part of the attack by signing vote messages. This attack can be executed against both full nodes and light clients.
#### Scenario 6
Validators:
* F -- a set of faulty validators that are not part of the validator set on the main chain at height *h + k*
Execution:
* There is a fork, and there exist two different headers for height *h + k*, with different validator sets:
    * VS2 on the main chain
    * forged header VS2', signed by F (and others)
* a light client trusts a header for height *h* (and the corresponding validator set VS1).
* As part of bisection header verification, it verifies the header at height *h + k* with new validator set VS2'.
@ -314,7 +291,7 @@ Consequences:
the light client involving a phantom validator must have been initiated by 1/3+ lunatic
validators that can forge a new validator set that includes the phantom validator. Only in
that case will the light client accept the phantom validator's vote. We need only worry about
punishing the 1/3+ lunatic cabal, which is the root cause of the attack.
### Lunatic validator


+ 21
- 29
spec/consensus/light-client/verification.md View File

@ -68,7 +68,6 @@ get trust for `hp`, and `hp` can be used to get trust for `snh`. If this is the
if not, we continue recursively until either we find a set of headers that can build (transitively) a trust relation
between `h` and `h1`, or we fail because two consecutive headers don't verify against each other.
## Definitions
### Data structures
@ -110,6 +109,7 @@ In the following, only the details of the data structures needed for this specif
For the purpose of this light client specification, we assume that the Tendermint Full Node
exposes the following functions over Tendermint RPC:
```go
// returns signed header: Header with Commit, for the given height
func Commit(height int64) (SignedHeader, error)
@ -119,6 +119,7 @@ exposes the following functions over Tendermint RPC:
```
Furthermore, we assume the following auxiliary functions:
```go
// returns true if the commit is for the header, ie. if it contains
// the correct hash of the header; otherwise false
@ -137,8 +138,6 @@ Furthermore, we assume the following auxiliary functions:
func hash(v2 ValidatorSet) []byte
```
### Functions
In the functions below we will be using `trustThreshold` as a parameter. For simplicity
we assume that `trustThreshold` is a float between `1/3` and `2/3` and we will not be checking it
in the pseudo-code.
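As an illustration of how `trustThreshold` enters the checks, a validity test might look like the sketch below (the `Validator` type and function name are invented here for the example):

```go
type Validator struct {
    Address     string
    VotingPower int64
}

// hasTrustLevel reports whether the validators in signers hold more than
// trustThreshold of the total voting power of trustedVals. Sketch only.
func hasTrustLevel(trustedVals []Validator, signers map[string]bool, trustThreshold float64) bool {
    var total, signed int64
    for _, v := range trustedVals {
        total += v.VotingPower
        if signers[v.Address] {
            signed += v.VotingPower
        }
    }
    return float64(signed) > trustThreshold*float64(total)
}
```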
@ -399,15 +398,13 @@ func fatalError(err) bool {
}
```
### The case `untrustedHeader.Height < trustedHeader.Height`
In the use case where someone tells the light client that application data that is relevant for it
can be read in the block of height `k` and the light client trusts a more recent header, we can use the
hashes to verify headers "down the chain." That is, we iterate down the heights and check the hashes in each step.
*Remark.* For the case where the light client trusts two headers `i` and `j` with `i < k < j`, we should
discuss/experiment whether the forward or the backward method is more effective.
```go
@ -449,31 +446,30 @@ func VerifyHeaderBackwards(trustedHeader Header,
}
```
*Assumption*: In the following, we assume that *untrusted_h.Header.height > trusted_h.Header.height*. We will quickly discuss the other case in the next section.
We consider the following set-up:
- the light client communicates with one full node
- the light client locally stores all the headers that have passed basic verification and that are within the light client's trust period. In the pseudo code below we
write *Store.Add(header)* for this. If a header failed to verify, then
the full node we are talking to is faulty and we should disconnect from it and reinitialise with a new peer.
- If `CanTrust` returns *error*, then the light client has seen a forged header or the trusted header has expired (it is outside its trusted period).
    - In the case of a forged header, the full node is faulty so the light client should disconnect and reinitialise with a new peer. If the trusted header has expired,
we need to reinitialise the light client with a new trusted header (that is within its trusted period), but we don't necessarily need to disconnect from the full node
we are talking to (as we haven't observed full node misbehavior in this case).
## Correctness of the Light Client Protocols
### Definitions
- `TRUSTED_PERIOD`: trusted period
- for realtime `t`, the predicate `correct(v,t)` is true if the validator `v`
follows the protocol until time `t` (we will see about recovery later).
- Validator fields. We will write a validator as a tuple `(v,p)` such that
    - `v` is the identifier (i.e., validator address; we assume identifiers are unique in each validator set)
    - `p` is its voting power
- For each header `h`, we write `trust(h) = true` if the light client trusts `h`.
### Failure Model
@ -487,7 +483,6 @@ Formally,
2/3 \sum_{(v,p) \in validators(h.NextValidatorsHash)} p
\]
The light client communicates with a full node and learns new headers. The goal is to locally decide whether to trust a header. Our implementation needs to ensure the following two properties:
- *Light Client Completeness*: If a header `h` was correctly generated by an instance of Tendermint consensus (and its age is less than the trusted period),
@ -532,14 +527,15 @@ is correct, but we only trust the fact that less than `1/3` of them are faulty (
*`VerifySingle` correctness arguments*
Light Client Accuracy:
- Assume by contradiction that `untrustedHeader` was not generated correctly and the light client sets trust to true because `verifySingle` returns without error.
- `trustedState` is trusted and sufficiently new
- by the Failure Model, less than `1/3` of the voting power held by faulty validators => at least one correct validator `v` has signed `untrustedHeader`.
- as `v` is correct up to now, it followed the Tendermint consensus protocol at least up to signing `untrustedHeader` => `untrustedHeader` was correctly generated.
We arrive at the required contradiction.
Light Client Completeness:
- The check is successful if sufficiently many validators of `trustedState` are still validators in the height `untrustedHeader.Height` and signed `untrustedHeader`.
- If `untrustedHeader.Height = trustedHeader.Height + 1`, and both headers were generated correctly, the test passes.
@ -550,10 +546,10 @@ Light Client Completeness:
However, in case of (frequent) changes in the validator set, the higher the `trustThreshold` is chosen, the more unlikely it becomes that
`verifySingle` returns with an error for non-adjacent headers.
*`VerifyBisection` correctness arguments (sketch)*
Light Client Accuracy:
- Assume by contradiction that the header at `untrustedHeight` obtained from the full node was not generated correctly and
the light client sets trust to true because `VerifyBisection` returns without an error.
- `VerifyBisection` returns without error only if all calls to `verifySingle` in the recursion return without error (return `nil`).
@ -568,12 +564,8 @@ This is only ensured if upon `Commit(pivot)` the light client is always provided
With `VerifyBisection`, a faulty full node could stall a light client by creating a long sequence of headers that are queried one-by-one by the light client and look OK,
before the light client eventually detects a problem. There are several ways to address this:
- Each call to `Commit` could be issued to a different full node
- Instead of querying header by header, the light client tells a full node which header it trusts, and the height of the header it needs. The full node responds with
the header along with a proof consisting of intermediate headers that the light client can use to verify. Roughly, `VerifyBisection` would then be executed at the full node.
- We may set a timeout on how long `VerifyBisection` may take.

+ 1
- 1
spec/consensus/readme.md View File

@ -4,7 +4,7 @@ cards: true
# Consensus
Specification of the Tendermint consensus protocol.
## Contents


+ 30
- 30
spec/consensus/signing.md View File

@ -14,12 +14,12 @@ being signed. It is defined in Go as follows:
type SignedMsgType byte
const (
    // Votes
    PrevoteType   SignedMsgType = 0x01
    PrecommitType SignedMsgType = 0x02

    // Proposals
    ProposalType SignedMsgType = 0x20
)
```
@ -48,13 +48,13 @@ BlockID is the structure used to represent the block:
```go
type BlockID struct {
    Hash        []byte
    PartsHeader PartSetHeader
}
type PartSetHeader struct {
    Hash  []byte
    Total int
}
```
@ -64,7 +64,7 @@ We introduce two methods, `BlockID.IsZero()` and `BlockID.IsComplete()` for thes
`BlockID.IsZero()` returns true for BlockID `b` if each of the following
are true:
```go
b.Hash == nil
b.PartsHeader.Total == 0
b.PartsHeader.Hash == nil
@ -73,7 +73,7 @@ b.PartsHeader.Hash == nil
`BlockID.IsComplete()` returns true for BlockID `b` if each of the following
are true:
```go
len(b.Hash) == 32
b.PartsHeader.Total > 0
len(b.PartsHeader.Hash) == 32
@ -85,13 +85,13 @@ The structure of a proposal for signing looks like:
```go
type CanonicalProposal struct {
    Type      SignedMsgType // type alias for byte
    Height    int64         `binary:"fixed64"`
    Round     int64         `binary:"fixed64"`
    POLRound  int64         `binary:"fixed64"`
    BlockID   BlockID
    Timestamp time.Time
    ChainID   string
}
```
@ -115,18 +115,18 @@ The structure of a vote for signing looks like:
```go
type CanonicalVote struct {
    Type      SignedMsgType // type alias for byte
    Height    int64         `binary:"fixed64"`
    Round     int64         `binary:"fixed64"`
    BlockID   BlockID
    Timestamp time.Time
    ChainID   string
}
```
A vote is valid if each of the following lines evaluates to true for vote `v`:
```go
v.Type == 0x1 || v.Type == 0x2
v.Height > 0
v.Round >= 0
@ -157,9 +157,9 @@ Assume the signer keeps the following state, `s`:
```go
type LastSigned struct {
    Height int64
    Round  int64
    Type   SignedMsgType // byte
}
```
@ -175,7 +175,7 @@ s.Type = m.Type
A signer should only sign a proposal `p` if any of the following lines are true:
```go
p.Height > s.Height
p.Height == s.Height && p.Round > s.Round
```
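Equivalently, as a sketch using the `LastSigned` state `s` from above (the function name is illustrative):

```go
// shouldSignProposal encodes the two conditions above: sign only for a
// strictly higher height, or a higher round at the same height.
func shouldSignProposal(s LastSigned, p CanonicalProposal) bool {
    return p.Height > s.Height ||
        (p.Height == s.Height && p.Round > s.Round)
}
```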
@ -187,7 +187,7 @@ Once a proposal or vote has been signed for a given height and round, a proposal
A signer should only sign a vote `v` if any of the following lines are true:
```go
v.Height > s.Height
v.Height == s.Height && v.Round > s.Round
v.Height == s.Height && v.Round == s.Round && v.Step == 0x1 && s.Step == 0x20


+ 109
- 109
spec/core/data_structures.md View File

@ -36,29 +36,29 @@ the data in the current block, the previous block, and the results returned by t
```go
type Header struct {
    // basic block info
    Version Version
    ChainID string
    Height  int64
    Time    Time

    // prev block info
    LastBlockID BlockID

    // hashes of block data
    LastCommitHash []byte // commit from validators from the last block
    DataHash       []byte // MerkleRoot of transaction hashes

    // hashes from the app output from the prev block
    ValidatorsHash     []byte // validators for the current block
    NextValidatorsHash []byte // validators for the next block
    ConsensusHash      []byte // consensus params for current block
    AppHash            []byte // state after txs from the previous block
    LastResultsHash    []byte // root hash of all results from the txs from the previous block

    // consensus info
    EvidenceHash    []byte // evidence included in the block
    ProposerAddress []byte // original proposer of the block
}
```
Further details on each of these fields are described below.
@ -67,8 +67,8 @@ Further details on each of these fields is described below.
```go
type Version struct {
    Block uint64
    App   uint64
}
```
@ -111,7 +111,7 @@ format, which uses two integers, one for Seconds and for Nanoseconds.
Data is just a wrapper for a list of transactions, where transactions are
arbitrary byte arrays:
```go
type Data struct {
Txs [][]byte
}
@ -124,10 +124,10 @@ validator. It also contains the relevant BlockID, height and round:
```go
type Commit struct {
    Height     int64
    Round      int
    BlockID    BlockID
    Signatures []CommitSig
}
```
@ -141,19 +141,19 @@ to reconstruct the vote set given the validator set.
type BlockIDFlag byte
const (
    // BlockIDFlagAbsent - no vote was received from a validator.
    BlockIDFlagAbsent BlockIDFlag = 0x01
    // BlockIDFlagCommit - voted for the Commit.BlockID.
    BlockIDFlagCommit = 0x02
    // BlockIDFlagNil - voted for nil.
    BlockIDFlagNil = 0x03
)
type CommitSig struct {
    BlockIDFlag      BlockIDFlag
    ValidatorAddress Address
    Timestamp        time.Time
    Signature        []byte
}
```
@ -168,14 +168,14 @@ The vote includes information about the validator signing it.
```go
type Vote struct {
    Type             byte
    Height           int64
    Round            int
    BlockID          BlockID
    Timestamp        Time
    ValidatorAddress []byte
    ValidatorIndex   int
    Signature        []byte
}
```
@ -193,7 +193,7 @@ See the [signature spec](./encoding.md#key-types) for more.
EvidenceData is a simple wrapper for a list of evidence:
```go
type EvidenceData struct {
Evidence []Evidence
}
@ -201,40 +201,40 @@ type EvidenceData struct {
## Evidence
Evidence in Tendermint is used to indicate breaches in the consensus by a validator.
Evidence in Tendermint is used to indicate breaches in the consensus by a validator.
```go
type Evidence interface {
    Height() int64                                     // height of the equivocation
    Time() time.Time                                   // time of the equivocation
    Address() []byte                                   // address of the equivocating validator
    Bytes() []byte                                     // bytes which comprise the evidence
    Hash() []byte                                      // hash of the evidence
    Verify(chainID string, pubKey crypto.PubKey) error // verify the evidence
    Equal(Evidence) bool                               // check equality of evidence
    ValidateBasic() error
    String() string
}
```
All evidence can be encoded and decoded to and from Protobuf with the `EvidenceToProto()`
and `EvidenceFromProto()` functions. The [Fork Accountability](../consensus/light-client/accountability.md)
document provides a good overview of the types of evidence and how they occur. For evidence to be committed on-chain, it must adhere to the validation rules of each evidence type and must not be expired. The expiration age, measured in both block height and time, is set in `EvidenceParams`. Each piece of evidence uses
the timestamp of the block in which it occurred to indicate its age.
### DuplicateVoteEvidence
`DuplicateVoteEvidence` represents a validator that has voted for two different blocks
in the same round of the same height. Votes are lexicographically sorted on `BlockID`.
```go
type DuplicateVoteEvidence struct {
    VoteA     *Vote
    VoteB     *Vote
    Timestamp time.Time
}
```
@ -252,15 +252,15 @@ Valid Duplicate Vote Evidence must adhere to the following rules:
### AmnesiaEvidence
`AmnesiaEvidence` represents a validator that has incorrectly voted for another block in a
different round to the block that the validator was previously locked on. This form
of evidence is generated differently from the rest. See this
[ADR](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-056-proving-amnesia-attacks.md) for more information.
```go
type AmnesiaEvidence struct {
    *PotentialAmnesiaEvidence
    Polc *ProofOfLockChange
}
```
@ -280,16 +280,16 @@ Valid Amnesia Evidence must adhere to the following rules:
### LunaticValidatorEvidence
`LunaticValidatorEvidence` represents a validator that has signed for an arbitrary application state.
This attack only applies to Light clients.
```go
type LunaticValidatorEvidence struct {
    Header             *Header
    Vote               *Vote
    InvalidHeaderField string
    Timestamp          time.Time
}
```
@ -330,7 +330,7 @@ A Header is valid if its corresponding fields are valid.
### Version
```go
block.Version.Block == state.Version.Consensus.Block
block.Version.App == state.Version.Consensus.App
```
The block version must match the consensus version from the state.
### ChainID
```go
len(block.ChainID) < 50
```
@ -357,7 +357,7 @@ The height is an incrementing integer. The first block has `block.Header.Height
### Time
```go
block.Header.Timestamp >= prevBlock.Header.Timestamp + state.consensusParams.Block.TimeIotaMs
block.Header.Timestamp == MedianTime(block.LastCommit, state.LastValidators)
```
@ -371,7 +371,7 @@ block being voted on.
The timestamp of the first block must be equal to the genesis time (since
there's no votes to compute the median).
```go
if block.Header.Height == state.InitialHeight {
block.Header.Timestamp == genesisTime
}
@ -543,21 +543,21 @@ using the given ChainID:
```go
func (vote *Vote) Verify(chainID string, pubKey crypto.PubKey) error {
    if !bytes.Equal(pubKey.Address(), vote.ValidatorAddress) {
        return ErrVoteInvalidValidatorAddress
    }
    if !pubKey.VerifyBytes(vote.SignBytes(chainID), vote.Signature) {
        return ErrVoteInvalidSignature
    }
    return nil
}
```
where `pubKey.Verify` performs the appropriate digital signature verification of the `pubKey`
against the given signature and message bytes.
## Execution
Once a block is validated, it can be executed against the state.
@ -574,26 +574,26 @@ set (TODO). Execute is defined as:
```go
func Execute(state State, app ABCIApp, block Block) State {
    // Function ApplyBlock executes the block of transactions against the app and returns the new root hash of the app state,
    // modifications to the validator set and the changes of the consensus parameters.
    AppHash, ValidatorChanges, ConsensusParamChanges := app.ApplyBlock(block)

    nextConsensusParams := UpdateConsensusParams(state.ConsensusParams, ConsensusParamChanges)
    return State{
        ChainID:         state.ChainID,
        InitialHeight:   state.InitialHeight,
        LastResults:     abciResponses.DeliverTxResults,
        AppHash:         AppHash,
        LastValidators:  state.Validators,
        Validators:      state.NextValidators,
        NextValidators:  UpdateValidators(state.NextValidators, ValidatorChanges),
        ConsensusParams: nextConsensusParams,
        Version: {
            Consensus: {
                AppVersion: nextConsensusParams.Version.AppVersion,
            },
        },
    }
}
```

+ 46
- 46
spec/core/encoding.md View File

@ -86,7 +86,7 @@ TODO: pubkey
The address is the first 20 bytes of the SHA256 hash of the raw 32-byte public key:
```go
address = SHA256(pubkey)[:20]
```
@ -98,7 +98,7 @@ TODO: pubkey
The address is the first 20 bytes of the SHA256 hash of the raw 32-byte public key:
```go
address = SHA256(pubkey)[:20]
```
@ -110,7 +110,7 @@ TODO: pubkey
The address is the RIPEMD160 hash of the SHA256 hash of the OpenSSL compressed public key:
```go
address = RIPEMD160(SHA256(pubkey))
```
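A hedged sketch of both derivations, using Go's standard library and the `golang.org/x/crypto/ripemd160` package (error handling and key-type checks omitted):

```go
import (
    "crypto/sha256"

    "golang.org/x/crypto/ripemd160"
)

// ed25519Address: first 20 bytes of the SHA256 of the raw 32-byte public key.
func ed25519Address(pubkey []byte) []byte {
    h := sha256.Sum256(pubkey)
    return h[:20]
}

// secp256k1Address: RIPEMD160 of the SHA256 of the compressed public key.
func secp256k1Address(pubkey []byte) []byte {
    sha := sha256.Sum256(pubkey)
    hasher := ripemd160.New()
    hasher.Write(sha[:])
    return hasher.Sum(nil)
}
```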
@ -194,7 +194,7 @@ The differences between RFC 6962 and the simplest form a merkle tree are that:
(The largest power of two less than the number of items) This allows new leaves to be added with less
recomputation. For example:
```md
Simple Tree with 6 items Simple Tree with 7 items
* *
@ -223,29 +223,29 @@ func emptyHash() []byte {
// SHA256(0x00 || leaf)
func leafHash(leaf []byte) []byte {
    return tmhash.Sum(append([]byte{0x00}, leaf...))
}
// SHA256(0x01 || left || right)
func innerHash(left []byte, right []byte) []byte {
    return tmhash.Sum(append([]byte{0x01}, append(left, right...)...))
}
// largest power of 2 less than k
func getSplitPoint(k int) { ... }
func MerkleRoot(items [][]byte) []byte {
    switch len(items) {
    case 0:
        return emptyHash()
    case 1:
        return leafHash(items[0])
    default:
        k := getSplitPoint(len(items))
        left := MerkleRoot(items[:k])
        right := MerkleRoot(items[k:])
        return innerHash(left, right)
    }
}
```
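`getSplitPoint` is elided above; a straightforward sketch consistent with its comment (largest power of 2 less than `k`) is:

```go
// getSplitPoint returns the largest power of 2 strictly less than k.
// Assumes k > 1, which holds whenever MerkleRoot recurses.
func getSplitPoint(k int) int {
    split := 1
    for split*2 < k {
        split *= 2
    }
    return split
}
```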
@ -253,7 +253,7 @@ Note: `MerkleRoot` operates on items which are arbitrary byte arrays, not
necessarily hashes. For items which need to be hashed first, we introduce the
`Hashes` function:
```go
func Hashes(items [][]byte) [][]byte {
    hashed := make([][]byte, len(items))
    for i := range items {
        hashed[i] = tmhash.Sum(items[i]) // SHA256 of each item
    }
    return hashed
}
@ -281,31 +281,31 @@ Which is verified as follows:
```golang
func (proof SimpleProof) Verify(rootHash []byte, leaf []byte) bool {
    assert(proof.LeafHash, leafHash(leaf))

    computedHash := computeHashFromAunts(proof.Index, proof.Total, proof.LeafHash, proof.Aunts)
    return computedHash == rootHash
}

func computeHashFromAunts(index, total int, leafHash []byte, innerHashes [][]byte) []byte {
    assert(index < total && index >= 0 && total > 0)

    if total == 1 {
        assert(len(innerHashes) == 0)
        return leafHash
    }

    assert(len(innerHashes) > 0)

    numLeft := getSplitPoint(total) // largest power of 2 less than total
    if index < numLeft {
        leftHash := computeHashFromAunts(index, numLeft, leafHash, innerHashes[:len(innerHashes)-1])
        assert(leftHash != nil)
        return innerHash(leftHash, innerHashes[len(innerHashes)-1])
    }
    rightHash := computeHashFromAunts(index-numLeft, total-numLeft, leafHash, innerHashes[:len(innerHashes)-1])
    assert(rightHash != nil)
    return innerHash(innerHashes[len(innerHashes)-1], rightHash)
}
```
@ -323,7 +323,7 @@ Because Tendermint only uses a Simple Merkle Tree, application developers are ex
Amino also supports JSON encoding - registered types are simply encoded as:
```json
{
"type": "<amino type name>",
"value": <JSON>
@ -332,7 +332,7 @@ Amino also supports JSON encoding - registered types are simply encoded as:
For instance, an ED25519 PubKey would look like:
```json
{
"type": "tendermint/PubKeyEd25519",
"value": "uZ4h63OFWuQ36ZZ4Bd6NF+/w9fWUwrOncrQsackrsTk="
@ -353,12 +353,12 @@ We call this encoding the SignBytes. For instance, SignBytes for a vote is the A
```go
type CanonicalVote struct {
    Type      byte
    Height    int64 `binary:"fixed64"`
    Round     int64 `binary:"fixed64"`
    BlockID   CanonicalBlockID
    Timestamp time.Time
    ChainID   string
}
```
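The fixed-width tags on `Height` and `Round` keep the SignBytes length-deterministic across values. A minimal sketch of what `binary:"fixed64"` means on the wire (proto3-style little-endian fixed64; illustrative, not the amino encoder itself):

```go
package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    // Height 1 still encodes to a full 8 bytes, independent of magnitude.
    buf := make([]byte, 8)
    binary.LittleEndian.PutUint64(buf, 1)
    fmt.Printf("%x\n", buf) // 0100000000000000
}
```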


+ 22
- 22
spec/core/state.md

@ -50,8 +50,8 @@ application as two `uint64` values:
```go
type Consensus struct {
    Block uint64
    App   uint64
}
```
@ -112,43 +112,43 @@ evolve without breaking the header.
```go
type ConsensusParams struct {
    Block
    Evidence
    Validator
    Version
}
type hashedParams struct {
    BlockMaxBytes int64
    BlockMaxGas   int64
}
func (params ConsensusParams) Hash() []byte {
    SHA256(hashedParams{
        BlockMaxBytes: params.Block.MaxBytes,
        BlockMaxGas:   params.Block.MaxGas,
    })
}
type BlockParams struct {
    MaxBytes   int64
    MaxGas     int64
    TimeIotaMs int64
}
type EvidenceParams struct {
    MaxAgeNumBlocks  int64
    MaxAgeDuration   time.Duration
    MaxNum           uint32
    ProofTrialPeriod int64
}
type ValidatorParams struct {
    PubKeyTypes []string
}
type VersionParams struct {
    AppVersion uint64
}
```
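To make the `Hash()` pseudocode above concrete: only `Block.MaxBytes` and `Block.MaxGas` feed the hash, so changes to the other parameters leave it untouched. A minimal sketch, assuming a simple big-endian concatenation rather than the encoding the implementation actually uses:

```go
package main

import (
    "crypto/sha256"
    "encoding/binary"
    "fmt"
)

// hashParams mirrors hashedParams: only the two block limits are hashed.
func hashParams(blockMaxBytes, blockMaxGas int64) []byte {
    buf := make([]byte, 16)
    binary.BigEndian.PutUint64(buf[0:8], uint64(blockMaxBytes))
    binary.BigEndian.PutUint64(buf[8:16], uint64(blockMaxGas))
    h := sha256.Sum256(buf)
    return h[:]
}

func main() {
    fmt.Printf("%x\n", hashParams(22020096, -1)) // MaxGas of -1 means "no limit"
}
```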
@ -170,7 +170,7 @@ For evidence in a block to be valid, it must satisfy:
```go
block.Header.Time-evidence.Time < ConsensusParams.Evidence.MaxAgeDuration &&
    block.Header.Height-evidence.Height < ConsensusParams.Evidence.MaxAgeNumBlocks
```
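Expressed as a Go helper (a sketch; the helper name and parameter list are ours), both the duration bound and the height bound must hold:

```go
package types

import "time"

// evidenceValidAge mirrors the validity condition above: evidence must be
// fresh both in wall-clock time and in block height.
func evidenceValidAge(headerTime, evTime time.Time, headerHeight, evHeight int64,
    maxAgeDuration time.Duration, maxAgeNumBlocks int64) bool {
    return headerTime.Sub(evTime) < maxAgeDuration &&
        headerHeight-evHeight < maxAgeNumBlocks
}
```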
#### Validator


+ 1
- 2
spec/p2p/config.md

@ -41,10 +41,9 @@ and that the node may not be able to keep the connection persistent.
These are IDs of the peers that we do not add to the address book or gossip to
other peers. They stay private to us.
## Unconditional Peers
`--p2p.unconditional_peer_ids “id100000000000000000000000000000000,id200000000000000000000000000000000”`
These are IDs of peers that are always allowed to connect, whether inbound or outbound, regardless of
whether `max_num_inbound_peers` or `max_num_outbound_peers` has been reached on the user's node.

+ 4
- 4
spec/p2p/connection.md

@ -30,11 +30,11 @@ If a pong or message is not received in sufficient time after a ping, the peer i
Messages in channels are chopped into smaller `msgPacket`s for multiplexing.
```go
type msgPacket struct {
    ChannelID byte
    EOF       byte // 1 means message ends here.
    Bytes     []byte
}
```
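A sketch of the chopping step described above (the EOF convention follows the struct; the helper name and the max payload parameter are ours):

```go
// chop splits one logical message into msgPackets; EOF=1 marks the last part.
func chop(channelID byte, msg []byte, maxPayload int) []msgPacket {
    var packets []msgPacket
    for {
        n := maxPayload
        eof := byte(0)
        if len(msg) <= n {
            n = len(msg)
            eof = 1 // message ends here
        }
        packets = append(packets, msgPacket{ChannelID: channelID, EOF: eof, Bytes: msg[:n]})
        msg = msg[n:]
        if eof == 1 {
            return packets
        }
    }
}
```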


+ 10
- 10
spec/p2p/peer.md

@ -29,10 +29,10 @@ Both handshakes have configurable timeouts (they should complete quickly).
Tendermint implements the Station-to-Station protocol
using X25519 keys for Diffie-Hellman key-exchange and chacha20poly1305 for encryption.
Previous versions of this protocol suffered from malleability attacks, where an active man-in-the-middle
attacker could compromise confidentiality as described in [Prime, Order Please!
Revisiting Small Subgroup and Invalid Curve Attacks on
Protocols using Diffie-Hellman](https://eprint.iacr.org/2019/526.pdf).
We have added a dependency on Merlin, a Keccak-based transcript hashing protocol, to ensure non-malleability.
@ -46,10 +46,10 @@ It goes as follows:
- compute the Diffie-Hellman shared secret using the peer's ephemeral public key and our ephemeral private key
- add the DH secret to the transcript labeled DH_SECRET.
- generate two keys to use for encryption (sending and receiving) and a challenge for authentication as follows:
- create an hkdf-sha256 instance with the key being the Diffie-Hellman shared secret, and the info parameter as
`TENDERMINT_SECRET_CONNECTION_KEY_AND_CHALLENGE_GEN`
- get 64 bytes of output from hkdf-sha256 (see the sketch after this list)
- if we had the smaller ephemeral pubkey, use the first 32 bytes for the key for receiving, the second 32 bytes for sending; else the opposite.
- use a separate nonce for receiving and sending. Both nonces start at 0, and should support the full 96 bit nonce range
- all communications from now on are encrypted in 1024 byte frames,
using the respective secret and nonce. Each nonce is incremented by one after each use.
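The key-derivation step in the list above can be sketched with `golang.org/x/crypto/hkdf` (a sketch of the described behavior, not the Tendermint source; the package and function names here are illustrative, and the authentication challenge is omitted):

```go
package conn

import (
    "crypto/sha256"
    "io"

    "golang.org/x/crypto/hkdf"
)

// deriveKeys draws 64 bytes from HKDF-SHA256 keyed by the DH shared secret and
// splits them into receive/send keys, depending on which side had the smaller
// ephemeral pubkey (locIsLeast).
func deriveKeys(dhSecret []byte, locIsLeast bool) (recv, send [32]byte) {
    info := []byte("TENDERMINT_SECRET_CONNECTION_KEY_AND_CHALLENGE_GEN")
    r := hkdf.New(sha256.New, dhSecret, nil, info)
    var out [64]byte
    if _, err := io.ReadFull(r, out[:]); err != nil {
        panic(err)
    }
    if locIsLeast { // we had the smaller ephemeral pubkey
        copy(recv[:], out[:32])
        copy(send[:], out[32:])
    } else {
        copy(send[:], out[:32])
        copy(recv[:], out[32:])
    }
    return recv, send
}
```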
@ -99,14 +99,14 @@ type NodeInfo struct {
}
type Version struct {
    P2P   uint64
    Block uint64
    App   uint64
}
type NodeInfoOther struct {
    TxIndex    string
    RPCAddress string
}
```


+ 85
- 64
spec/reactors/block_sync/bcv1/impl-v1.md

@ -1,11 +1,13 @@
# Blockchain Reactor v1
## Data Structures
The data structures used are illustrated below.
![Data Structures](img/bc-reactor-new-datastructs.png)
### BlockchainReactor
- is a `p2p.BaseReactor`.
- has a `store.BlockStore` for persistence.
- executes blocks using an `sm.BlockExecutor`.
@ -17,33 +19,34 @@ The data structures used are illustrated below.
```go
type BlockchainReactor struct {
    p2p.BaseReactor

    initialState sm.State // immutable
    state        sm.State

    blockExec *sm.BlockExecutor
    store     *store.BlockStore

    fastSync bool

    fsm          *BcReactorFSM
    blocksSynced int

    // Receive goroutine forwards messages to this channel to be processed in the context of the poolRoutine.
    messagesForFSMCh chan bcReactorMessage

    // Switch goroutine may send RemovePeer to the blockchain reactor. This is an error message that is relayed
    // to this channel to be processed in the context of the poolRoutine.
    errorsForFSMCh chan bcReactorMessage

    // This channel is used by the FSM and indirectly the block pool to report errors to the blockchain reactor and
    // the switch.
    eventsFromFSMCh chan bcFsmMessage
}
```
#### BcReactorFSM
- implements a simple finite state machine.
- has a state and a state timer.
- has a `BlockPool` to keep track of block requests sent to peers and blocks received from peers.
@ -51,49 +54,53 @@ type BlockchainReactor struct {
```go
type BcReactorFSM struct {
    logger log.Logger
    mtx    sync.Mutex

    startTime time.Time

    state      *bcReactorFSMState
    stateTimer *time.Timer
    pool       *BlockPool

    // interface used to call the Blockchain reactor to send StatusRequest, BlockRequest, reporting errors, etc.
    toBcR bcReactor
}
```
#### BlockPool
- maintains a peer set, implemented as a map of peer ID to `BpPeer`.
- maintains a set of requests made to peers, implemented as a map of block request heights to peer IDs.
- maintains a list of future block requests needed to advance the fast-sync. This is a list of block heights.
- keeps track of the maximum height of the peers in the set.
- uses an interface to send requests and report errors to the reactor (via FSM).
```go
type BlockPool struct {
    logger log.Logger
    // Set of peers that have sent status responses, with height bigger than pool.Height
    peers map[p2p.ID]*BpPeer
    // Set of block heights and the corresponding peers from where a block response is expected or has been received.
    blocks map[int64]p2p.ID

    plannedRequests   map[int64]struct{} // list of blocks to be assigned peers for blockRequest
    nextRequestHeight int64              // next height to be added to plannedRequests

    Height        int64 // height of next block to execute
    MaxPeerHeight int64 // maximum height of all peers
    toBcR         bcReactor
}
```
Some reasons for the `BlockPool` data structure content:
1. If a peer is removed by the switch, fast access is required to the peer and the block requests made to that peer in order to redo them (see the sketch after this list).
2. When block verification fails, fast access is required from the block height to the peer and the block requests made to that peer in order to redo them.
3. The `BlockchainReactor` main routine decides when the block pool is running low and asks the `BlockPool` (via FSM) to make more requests. The `BlockPool` creates a list of requests and triggers the sending of the block requests (via the interface). The reason it maintains a list of requests is to support the redo operations that may occur during error handling. These are redone when the `BlockchainReactor` requires more blocks.
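For instance, the redo path for reason (1) could look roughly like the hypothetical helper below (the method name and exact bookkeeping are ours): the `blocks` map gives fast access from heights to the failed peer, and the affected heights are put back into `plannedRequests` for reassignment.

```go
// redoRequestsForPeer is a hypothetical sketch of the redo path for reason (1).
func (pool *BlockPool) redoRequestsForPeer(peerID p2p.ID) {
    for height, id := range pool.blocks {
        if id == peerID {
            delete(pool.blocks, height)
            pool.plannedRequests[height] = struct{}{} // will be re-assigned to another peer
        }
    }
    delete(pool.peers, peerID)
}
```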
#### BpPeer
- keeps track of a single peer, with height bigger than the initial height.
- maintains the block requests made to the peer and the blocks received from the peer until they are executed.
- monitors the peer speed when there are pending requests.
@ -101,17 +108,17 @@ Some reasons for the `BlockPool` data structure content:
```go
type BpPeer struct {
    logger log.Logger
    ID     p2p.ID

    Height                  int64                  // the peer reported height
    NumPendingBlockRequests int                    // number of requests still waiting for block responses
    blocks                  map[int64]*types.Block // blocks received or expected to be received from this peer
    blockResponseTimer      *time.Timer
    recvMonitor             *flow.Monitor
    params                  *BpPeerParams // parameters for timer and monitor

    onErr func(err error, peerID p2p.ID) // function to call on error
}
```
@ -120,61 +127,73 @@ type BpPeer struct {
The diagram below shows the goroutines (depicted by the gray blocks), timers (shown on the left with their values) and channels (colored rectangles). The FSM box shows some of the functionality; it is not a separate goroutine.
The interface used by the FSM is shown in light red with the `IF` block. This is used to:
- send block requests
- report peer errors to the switch - this results in the reactor calling `switch.StopPeerForError()` and, if triggered by the peer timeout routine, a `removePeerEv` is sent to the FSM and action is taken from the context of the `poolRoutine()`
- ask the reactor to reset the state timers. The timers are owned by the FSM while the timeout routine is defined by the reactor. This was done in order to avoid running timers in tests and will change in the next revision.
There are two main goroutines implemented by the blockchain reactor. All I/O operations are performed from the `poolRoutine()` context while the CPU intensive operations related to the block execution are performed from the context of the `executeBlocksRoutine()`. All goroutines are detailed in the next sections.
![Go Routines Diagram](img/bc-reactor-new-goroutines.png)
#### Receive()
Fast-sync messages from peers are received by this goroutine. It performs basic validation and:
- in helper mode (i.e. for request messages) it replies immediately. This differs from the proposal in adr-040, which specifies having the FSM handle these.
- forwards response messages to the `poolRoutine()`.
#### poolRoutine()
(name kept as in the previous reactor).
It starts the `executeBlocksRoutine()` and the FSM. It then waits in a loop for events. These are received from the following channels:
- `sendBlockRequestTicker.C` - every 10msec the reactor asks FSM to make more block requests up to a maximum. Note: currently this value is constant but could be changed based on low/high watermark thresholds for the number of blocks received and waiting to be processed, the number of blockResponse messages waiting in messagesForFSMCh, etc.
- `statusUpdateTicker.C` - every 10 seconds the reactor broadcasts status requests to peers. While adr-040 specifies this to run within the FSM, at this point this functionality is kept in the reactor.
- `messagesForFSMCh` - the `Receive()` goroutine sends status and block response messages to this channel and the reactor calls FSM to handle them.
- `errorsForFSMCh` - this channel receives the following events:
- peer remove - when the switch removes a peer
- state timeout event - when FSM state timers trigger
The reactor forwards these messages to the FSM.
- `eventsFromFSMCh` - there are two type of events sent over this channel:
- `syncFinishedEv` - triggered when FSM enters `finished` state and calls the switchToConsensus() interface function.
- `peerErrorEv`- peer timer expiry goroutine sends this event over the channel for processing from poolRoutine() context.
#### executeBlocksRoutine()
Started by the `poolRoutine()`, it retrieves blocks from the pool and executes them:
- `processReceivedBlockTicker.C` - a ticker event is received over the channel every 10msec and its handling results in a signal being sent to the `doProcessBlockCh` channel.
- `doProcessBlockCh` - events are received on this channel as described above, and upon processing, blocks are retrieved from the pool and executed.
### FSM
![fsm](img/bc-reactor-new-fsm.png)
#### States
##### init (aka unknown)
The FSM is created in `unknown` state. When started by the reactor (`startFSMEv`), it broadcasts Status requests and transitions to `waitForPeer` state.
##### waitForPeer
In this state, the FSM waits for a Status response from a "tall" peer. A timer is running in this state to allow the FSM to finish if there are no useful peers.
If the timer expires, it moves to `finished` state and calls the reactor to switch to consensus.
If a Status response is received from a peer within the timeout, the FSM transitions to `waitForBlock` state.
##### waitForBlock
In this state the FSM makes Block requests (triggered by a ticker in reactor) and waits for Block responses. There is a timer running in this state to detect if a peer is not sending the block at current processing height. If the timer expires, the FSM removes the peer where the request was sent and all requests made to that peer are redone.
As blocks are received they are stored by the pool. Block execution is independently performed by the reactor and the result reported to the FSM:
- if there are no errors, the FSM increases the pool height and resets the state timer.
- if there are errors, the peers that delivered the two blocks (at height and height+1) are removed and the requests redone.
In this state the FSM may receive peer remove events in any of the following scenarios:
- the switch is removing a peer
- a peer is penalized because it has not responded to some block requests for a long time
- a peer is penalized for being slow
@ -183,6 +202,7 @@ When processing of the last block (the one with height equal to the highest peer
If after a peer update or removal the pool height is the same as maxPeerHeight, the FSM transitions to `finished` state.
##### finished
When entering this state, the FSM calls the reactor to switch to consensus and performs cleanup.
#### Events
@ -191,18 +211,19 @@ The following events are handled by the FSM:
```go
const (
    startFSMEv = iota + 1
    statusResponseEv
    blockResponseEv
    processedBlockEv
    makeRequestsEv
    stopFSMEv
    peerRemoveEv = iota + 256
    stateTimeoutEv
)
```
### Examples of Scenarios and Termination Handling
A few scenarios are covered in this section together with the current/proposed handling.
In general, the scenarios involving faulty peers are made worse by the fact that they may quickly be re-added.


+ 20
- 21
spec/reactors/block_sync/impl.md

@ -1,6 +1,6 @@
# Blockchain Reactor v0 Module

## Blockchain Reactor
- coordinates the pool for syncing
- coordinates the store for persistence
@ -10,35 +10,34 @@
- starts the pool.Start() and its poolRoutine()
- registers all the concrete types and interfaces for serialisation
### poolRoutine
- listens to these channels:
    - pool requests blocks from a specific peer by posting to requestsCh, block reactor then sends
      a &bcBlockRequestMessage for a specific height
    - pool signals timeout of a specific peer by posting to timeoutsCh
    - switchToConsensusTicker to periodically try and switch to consensus
    - trySyncTicker to periodically check if we have fallen behind and then catch-up sync
        - if there aren't any new blocks available on the pool it skips syncing
- tries to sync the app by taking downloaded blocks from the pool, gives them to the app and stores
them on disk
- implements Receive which is called by the switch/peer
    - calls AddBlock on the pool when it receives a new block from a peer
## Block Pool
- responsible for downloading blocks from peers
- makeRequestersRoutine()
    - removes timeout peers
    - starts new requesters by calling makeNextRequester()
- requestRoutine():
    - picks a peer and sends the request, then blocks until:
        - pool is stopped by listening to pool.Quit
        - requester is stopped by listening to Quit
        - request is redone
        - we receive a block
    - gotBlockCh is strange

## Go Routines in Blockchain Reactor
![Go Routines Diagram](img/bc-reactor-routines.png)

+ 16
- 16
spec/reactors/block_sync/reactor.md

@ -186,7 +186,7 @@ fetchBlock(height, pool):
mtx.Lock()
pool.numPending++
redo = true
mtx.Unlock()
}
}
}
@ -251,23 +251,23 @@ main(pool):
while true do
select {
  upon receiving BlockRequest(Height, Peer) on pool.requestsChannel:
    try to send bcBlockRequestMessage(Height) to Peer

  upon receiving error(peer) on errorsChannel:
    stop peer for error

  upon receiving message on statusUpdateTickerChannel:
    broadcast bcStatusRequestMessage(bcR.store.Height) // message sent in a separate routine

  upon receiving message on switchToConsensusTickerChannel:
    pool.mtx.Lock()
    receivedBlockOrTimedOut = pool.height > 0 || (time.Now() - pool.startTime) > 5 Seconds
    ourChainIsLongestAmongPeers = pool.maxPeerHeight == 0 || pool.height >= pool.maxPeerHeight
    haveSomePeers = size of pool.peers > 0
    pool.mtx.Unlock()
    if haveSomePeers && receivedBlockOrTimedOut && ourChainIsLongestAmongPeers then
      switch to consensus mode
upon receiving message on trySyncTickerChannel:
for i = 0; i < 10; i++ do
@ -294,7 +294,7 @@ main(pool):
redoRequestsForPeer(pool, peerId):
for each requester in pool.requesters do
if requester.getPeerID() == peerID
enqueue msg on redoChannel for requester
```
## Channels


+ 48
- 46
spec/reactors/consensus/consensus-reactor.md

@ -42,19 +42,19 @@ received votes and last commit and last validators set.
```go
type RoundState struct {
    Height             int64
    Round              int
    Step               RoundStepType
    Validators         ValidatorSet
    Proposal           Proposal
    ProposalBlock      Block
    ProposalBlockParts PartSet
    LockedRound        int
    LockedBlock        Block
    LockedBlockParts   PartSet
    Votes              HeightVoteSet
    LastCommit         VoteSet
    LastValidators     ValidatorSet
}
```
@ -77,20 +77,20 @@ Consensus Reactor and by the gossip routines upon sending a message to the peer.
```golang
type PeerRoundState struct {
    Height                   int64         // Height peer is at
    Round                    int           // Round peer is at, -1 if unknown.
    Step                     RoundStepType // Step peer is at
    Proposal                 bool          // True if peer has proposal for this round
    ProposalBlockPartsHeader PartSetHeader
    ProposalBlockParts       BitArray
    ProposalPOLRound         int      // Proposal's POL round. -1 if none.
    ProposalPOL              BitArray // nil until ProposalPOLMessage received.
    Prevotes                 BitArray // All votes peer has for this round
    Precommits               BitArray // All precommits peer has for this round
    LastCommitRound          int      // Round of commit for last height. -1 if none.
    LastCommit               BitArray // All commit precommits of commit for last height.
    CatchupCommitRound       int      // Round that we have commit for. Not necessarily unique. -1 if none.
    CatchupCommit            BitArray // All commit precommits peer has for this height & CatchupCommitRound
}
```
@ -106,7 +106,7 @@ respectively.
### NewRoundStepMessage handler
```go
handleMessage(msg):
if msg is from smaller height/round/step then return
// Just remember these values.
@ -123,17 +123,17 @@ handleMessage(msg):
if prs.Height has been updated then
if prsHeight+1 == msg.Height && prsRound == msg.LastCommitRound then
prs.LastCommitRound = msg.LastCommitRound
prs.LastCommit = prs.Precommits
} else {
prs.LastCommitRound = msg.LastCommitRound
prs.LastCommit = nil
}
Reset prs.CatchupCommitRound and prs.CatchupCommit
```
### NewValidBlockMessage handler
```go
handleMessage(msg):
if prs.Height != msg.Height then return
@ -148,7 +148,7 @@ protect the node against DOS attacks.
### HasVoteMessage handler
```go
handleMessage(msg):
if prs.Height == msg.Height then
prs.setHasVote(msg.Height, msg.Round, msg.Type, msg.Index)
@ -156,7 +156,7 @@ handleMessage(msg):
### VoteSetMaj23Message handler
```go
handleMessage(msg):
if prs.Height == msg.Height then
Record in rs that a peer claim to have ⅔ majority for msg.BlockID
@ -165,7 +165,7 @@ handleMessage(msg):
### ProposalMessage handler
```go
handleMessage(msg):
if prs.Height != msg.Height || prs.Round != msg.Round || prs.Proposal then return
prs.Proposal = true
@ -178,7 +178,7 @@ handleMessage(msg):
### ProposalPOLMessage handler
```go
handleMessage(msg):
if prs.Height != msg.Height or prs.ProposalPOLRound != msg.ProposalPOLRound then return
prs.ProposalPOL = msg.ProposalPOL
@ -189,7 +189,7 @@ node against DOS attacks.
### BlockPartMessage handler
```go
handleMessage(msg):
if prs.Height != msg.Height || prs.Round != msg.Round then return
Record in prs that peer has block part msg.Part.Index
@ -198,7 +198,7 @@ handleMessage(msg):
### VoteMessage handler
```go
handleMessage(msg):
Record in prs that a peer knows vote with index msg.vote.ValidatorIndex for particular height and round
Send msg through internal peerMsgQueue to ConsensusState service
@ -206,7 +206,7 @@ handleMessage(msg):
### VoteSetBitsMessage handler
```go
handleMessage(msg):
Update prs for the bit-array of votes peer claims to have for the msg.BlockID
```
@ -220,12 +220,12 @@ It is used to send the following messages to the peer: `BlockPartMessage`, `Prop
`ProposalPOLMessage` on the DataChannel. The gossip data routine is based on the local RoundState (`rs`)
and the known PeerRoundState (`prs`). The routine repeats forever the logic shown below:
```go
1a) if rs.ProposalBlockPartsHeader == prs.ProposalBlockPartsHeader and the peer does not have all the proposal parts then
Part = pick a random proposal block part the peer does not have
Send BlockPartMessage(rs.Height, rs.Round, Part) to the peer on the DataChannel
if send returns true, record that the peer knows the corresponding block Part
Continue
1b) if (0 < prs.Height) and (prs.Height < rs.Height) then
help peer catch up using gossipDataForCatchup function
@ -239,8 +239,8 @@ and the known PeerRoundState (`prs`). The routine repeats forever the logic show
1d) if (rs.Proposal != nil and !prs.Proposal) then
Send ProposalMessage(rs.Proposal) to the peer
if send returns true, record that the peer knows Proposal
if 0 <= rs.Proposal.POLRound then
polRound = rs.Proposal.POLRound
prevotesBitArray = rs.Votes.Prevotes(polRound).BitArray()
Send ProposalPOLMessage(rs.Height, polRound, prevotesBitArray)
Continue
@ -253,16 +253,18 @@ and the known PeerRoundState (`prs`). The routine repeats forever the logic show
This function is responsible for helping peer catch up if it is at the smaller height (prs.Height < rs.Height).
The function executes the following logic:
```go
if peer does not have all block parts for prs.ProposalBlockPart then
blockMeta = Load Block Metadata for height prs.Height from blockStore
if (!blockMeta.BlockID.PartsHeader == prs.ProposalBlockPartsHeader) then
Sleep PeerGossipSleepDuration
return
Part = pick a random proposal block part the peer does not have
Send BlockPartMessage(prs.Height, prs.Round, Part) to the peer on the DataChannel
if send returns true, record that the peer knows the corresponding block Part
return
else Sleep PeerGossipSleepDuration
```
## Gossip Votes Routine
@ -270,7 +272,7 @@ It is used to send the following message: `VoteMessage` on the VoteChannel.
The gossip votes routine is based on the local RoundState (`rs`)
and the known PeerRoundState (`prs`). The routine repeats forever the logic shown below:
```go
1a) if rs.Height == prs.Height then
if prs.Step == RoundStepNewHeight then
vote = random vote from rs.LastCommit the peer does not have
@ -284,7 +286,7 @@ and the known PeerRoundState (`prs`). The routine repeats forever the logic show
if send returns true, continue
if prs.Step <= RoundStepPrecommit and prs.Round != -1 and prs.Round <= rs.Round then
Precommits = rs.Votes.Precommits(prs.Round)
vote = random vote from Precommits the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
@ -315,7 +317,7 @@ It is used to send the following message: `VoteSetMaj23Message`. `VoteSetMaj23Me
BlockID has seen +2/3 votes. This routine is based on the local RoundState (`rs`) and the known PeerRoundState
(`prs`). The routine repeats forever the logic shown below.
```go
1a) if rs.Height == prs.Height then
Prevotes = rs.Votes.Prevotes(prs.Round)
if there is a ⅔ majority for some blockId in Prevotes then


+ 7
- 7
spec/reactors/consensus/consensus.md

@ -16,7 +16,7 @@ explained in a forthcoming document.
For efficiency reasons, validators in the Tendermint consensus protocol do not agree directly on the
block as the block size is big, i.e., they don't embed the block inside `Proposal` and
`VoteMessage`. Instead, they reach agreement on the `BlockID` (see `BlockID` definition in
[Blockchain](https://github.com/tendermint/spec/blob/master/spec/core/data_structures.md#blockid) section)
that uniquely identifies each block. The block itself is
disseminated to validator processes using a peer-to-peer gossiping protocol. It starts by having a
proposer first split a block into a number of block parts, which are then gossiped between
@ -49,7 +49,7 @@ type ProposalMessage struct {
Proposal contains height and round for which this proposal is made, BlockID as a unique identifier
of proposed block, timestamp, and POLRound (a so-called Proof-of-Lock (POL) round) that is needed for
termination of the consensus. If POLRound >= 0, then BlockID corresponds to the block that
is locked in POLRound. The message is signed by the validator private key.
```go
@ -66,8 +66,8 @@ type Proposal struct {
## VoteMessage
VoteMessage is sent to vote for some block (or to inform others that a process does not vote in the
current round). Vote is defined in the
[Blockchain](https://github.com/tendermint/spec/blob/master/spec/core/data_structures.md#blockidd)
section and contains validator's
information (validator address and index), height and round for which the vote is sent, vote type,
blockID if the process votes for some block (`nil` otherwise) and a timestamp when the vote is sent. The
@ -110,9 +110,9 @@ type NewRoundStepMessage struct {
## NewValidBlockMessage
NewValidBlockMessage is sent when a validator observes a valid block B in some round r,
i.e., there is a Proposal for block B and 2/3+ prevotes for the block B in the round r.
It contains the height and round in which the valid block is observed, a block parts header that describes
the valid block and is used to obtain all
block parts, and a bit array of the block parts a process currently has, so its peers can know what
parts it is missing so they can send them.
@ -121,7 +121,7 @@ In case the block is also committed, then IsCommit flag is set to true.
```go
type NewValidBlockMessage struct {
Height int64
Round int
BlockPartsHeader PartSetHeader
BlockParts BitArray
IsCommit bool


+ 78
- 50
spec/reactors/consensus/proposer-selection.md

@ -4,41 +4,46 @@ This document specifies the Proposer Selection Procedure that is used in Tenderm
As Tendermint is a “leader-based protocol”, the proposer selection is critical for its correct functioning.
At a given block height, the proposer selection algorithm runs with the same validator set at each round.
Between heights, an updated validator set may be specified by the application as part of the ABCIResponses' EndBlock.
## Requirements for Proposer Selection
This section covers the requirements, with Rx being mandatory and Ox being optional requirements.
The following requirements must be met by the Proposer Selection procedure:
### R1: Determinism
Given a validator set `V`, and two honest validators `p` and `q`, for each height `h` and each round `r` the following must hold:
`proposer_p(h,r) = proposer_q(h,r)`
where `proposer_p(h,r)` is the proposer returned by the Proposer Selection Procedure at process `p`, at height `h` and round `r`.
### R2: Fairness
Given a validator set with total voting power P and a sequence S of elections, in any sub-sequence of S with length C*P a validator v must be elected as proposer C*VP(v) times, i.e. with frequency:
f(v) ~ VP(v) / P
where C is a tolerance factor for validator set changes with following values:
- C == 1 if there are no validator set changes
- C ~ k when there are validator changes
*[this needs more work]*
## Basic Algorithm
At its core, the proposer selection procedure uses a weighted round-robin algorithm.
A model that gives a good intuition of how and why the selection algorithm works, and why it is fair, is that of a priority queue. The validators move ahead in this queue according to their voting power (the higher the voting power the faster a validator moves towards the head of the queue). When the algorithm runs the following happens:
- all validators move "ahead" according to their powers: for each validator, increase the priority by the voting power
- first in the queue becomes the proposer: select the validator with highest priority
- move the proposer back in the queue: decrease the proposer's priority by the total voting power
Notation:
- vset - the validator set
- n - the number of validators
- VP(i) - voting power of validator i
@ -49,7 +54,7 @@ Notation:
Simple view at the Selection Algorithm:
```md
def ProposerSelection (vset):
// compute priorities and elect proposer
@ -59,16 +64,16 @@ Simple view at the Selection Algorithm:
A(prop) -= P
```
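A runnable Go version of this simple view (illustrative types; tie-breaking here is by set order, which is an assumption — the implementation breaks ties deterministically):

```go
package main

import "fmt"

type val struct {
    name string
    vp   int64 // voting power VP(i)
    a    int64 // proposer priority A(i)
}

// proposerSelection runs one election over the validator set.
func proposerSelection(vset []*val) *val {
    var p int64 // total voting power P
    for _, v := range vset {
        v.a += v.vp
        p += v.vp
    }
    prop := vset[0]
    for _, v := range vset[1:] {
        if v.a > prop.a {
            prop = v
        }
    }
    prop.a -= p
    return prop
}

func main() {
    vset := []*val{{name: "p1", vp: 1}, {name: "p2", vp: 3}}
    for i := 0; i < 8; i++ {
        fmt.Print(proposerSelection(vset).name, " ")
    }
    // Output (with this tie-breaking): p2 p1 p2 p2 p2 p1 p2 p2
}
```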
## Stable Set
Consider the validator set:
Validator | p1| p2
----------|---|---
VP | 1 | 3
Assuming no validator changes, the following table shows the proposer priority computation over a few runs. Four runs of the selection procedure are shown; starting with the 5th, the same values are computed.
Each row shows the priority queue and the process place in it. The proposer is the closest to the head, the rightmost validator. As priorities are updated, the validators move right in the queue. The proposer moves left as its priority is reduced after election.
|Priority Run | -2| -1| 0 | 1| 2 | 3 | 4 | 5 | Alg step
|--------------- |---|---|---- |---|---- |---|---|---|--------
@ -83,20 +88,23 @@ Each row shows the priority queue and the process place in it. The proposer is t
| | | |p1,p2| | | | | |A(p2)-= P
It can be shown that:
- At the end of each run k+1 the sum of the priorities is the same as at end of run k. If a new set's priorities are initialized to 0 then the sum of priorities will be 0 at each run while there are no changes.
- The max distance between priorities is (n-1) * P. *[formal proof not finished]*
## Validator Set Changes
Between proposer selection runs the validator set may change. Some changes have implications on the proposer election.
### Voting Power Change
Consider again the earlier example and assume that the voting power of p1 is changed to 4:
Validator | p1| p2
----------|---| ---
VP | 4 | 3
Let's also assume that before this change the proposer priorities were as shown in the first row (last run). As can be seen, the selection could run again, without changes, as before.
|Priority Run| -2 | -1 | 0 | 1 | 2 | Comment
|--------------| ---|--- |------|--- |--- |--------
@ -107,20 +115,22 @@ Let's also assume that before this change the proposer priorites were as shown i
However, when a validator changes power from a high to a low value, some other validators remain far back in the queue for a long time. This scenario is considered again in the Proposer Priority Range section.
As before:
- At the end of each run k+1 the sum of the priorities is the same as at run k.
- The max distance between priorities is (n-1) * P.
### Validator Removal
Consider a new example with set:
Validator | p1 | p2 | p3 |
--------- |--- |--- |--- |
VP | 1 | 2 | 3 |
Let's assume that after the last run the proposer priorities were as shown in the first row, with their sum being 0. After p2 is removed, at the end of the next proposer selection run (penultimate row) the sum of priorities is -2 (minus the priority of the removed process).
The procedure could continue without modifications. However, after a sufficiently large number of modifications in validator set, the priority values would migrate towards maximum or minimum allowed values causing truncations due to overflow detection.
For this reason, the selection procedure adds another __new step__ that centers the current priority values such that the priority sum remains close to 0.
|Priority Run |-3 | -2 | -1 | 0 | 1 | 2 | 4 |Comment
|--------------- |--- | ---|--- |--- |--- |--- |---|--------
@ -132,6 +142,7 @@ For this reason, the selection procedure adds another __new step__ that centers
The modified selection algorithm is:
```md
def ProposerSelection (vset):
// center priorities around zero
@ -144,18 +155,23 @@ The modified selection algorithm is:
A(i) += VP(i)
prop = max(A)
A(prop) -= P
```
Observations:
- The sum of priorities is now close to 0. Due to integer division the sum is an integer in (-n, n), where n is the number of validators.
### New Validator
When a new validator is added, same problem as the one described for removal appears, the sum of priorities in the new set is not zero. This is fixed with the centering step introduced above.
One other issue that needs to be addressed is the following. A validator V that has just been elected is moved to the end of the queue. If the validator set is large and/or other validators have significantly higher power, V will have to wait many runs to be elected. If V removes and re-adds itself to the set, it would make a significant (albeit unfair) "jump" ahead in the queue.
In order to prevent this, when a new validator is added, its initial priority is set to:
```md
A(V) = -1.125 * P
```
where P is the total voting power of the set including V.
@ -169,7 +185,9 @@ VP | 1 | 3 | 8
then p3 will start with proposer priority:
```md
A(p3) = -1.125 * (1 + 3 + 8) ~ -13
```
Note that since the current computation uses integer division, there is a penalty loss when the sum of the voting power is less than 8.
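In Go-flavored integer arithmetic (a sketch; the function name is ours), `-1.125 * P` is `-(P + P/8)`, which reproduces the example above and shows the penalty loss for small sets:

```go
// initialPriority computes -1.125*P with integer math: -(P + P/8).
func initialPriority(totalPowerIncludingV int64) int64 {
    return -(totalPowerIncludingV + totalPowerIncludingV/8)
}

// initialPriority(12) == -13  (the example above: 1 + 3 + 8)
// initialPriority(7)  == -7   (P/8 == 0: the 0.125*P penalty is lost)
```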
@ -183,7 +201,8 @@ In the next run, p3 will still be ahead in the queue, elected as proposer and mo
| | | | | | p3 | | | | p2| | p1|A(i)+=VP(i)
| | | | p1 | | p3 | | | | p2| | |A(p1)-=P
## Proposer Priority Range
With the introduction of centering, some interesting cases occur. Low power validators that bind early in a set that includes high power validator(s) benefit from subsequent additions to the set. This is because these early validators run through more right shift operations during centering, operations that increase their priority.
As an example, consider the set where p2 is added after p1, with priority -1.125 * 80k = -90k. After the selection procedure runs once:
@ -198,83 +217,90 @@ Then execute the following steps:
1. Add a new validator p3:
    Validator | p1 | p2 | p3
    ----------|-----|--- |----
    VP | 80k | 10 | 10
2. Run selection once. The notation '..p'/'p..' means very small deviations compared to column priority.
    |Priority Run | -90k..| -60k | -45k | -15k| 0 | 45k | 75k | 155k | Comment
    |--------------|------ |----- |------- |---- |---|---- |----- |------- |---------
    | last run | p3 | | p2 | | | p1 | | | __added p3__
    | next run
    | *right_shift*| | p3 | | p2 | | | p1 | | A(i) -= avg,avg=-30k
    | | | ..p3| | ..p2| | | | p1 | A(i)+=VP(i)
    | | | ..p3| | ..p2| | | p1.. | | A(p1)-=P, P=80k+20
3. Remove p1 and run selection once:
    Validator | p3 | p2 | Comment
    ----------|----- |---- |--------
    VP | 10 | 10 |
    A |-60k |-15k |
    A |-22.5k|22.5k| __run selection__
At this point, while the total voting power is 20, the distance between priorities is 45k. It will take 4500 runs for p3 to catch up with p2.
In order to prevent these types of scenarios, the selection algorithm performs scaling of priorities such that the difference between min and max values is smaller than two times the total voting power.
The modified selection algorithm is:
```md
def ProposerSelection (vset):
// scale the priority values
diff = max(A)-min(A)
threshold = 2 * P
if diff > threshold:
scale = diff/threshold
for each validator i in vset:
A(i) = A(i)/scale
// center priorities around zero
avg = sum(A(i) for i in vset)/len(vset)
for each validator i in vset:
A(i) -= avg
// compute priorities and elect proposer
for each validator i in vset:
A(i) += VP(i)
prop = max(A)
A(prop) -= P
```
Observations:
- With this modification, the maximum distance between priorities becomes 2 * P.

Note also that even during steady state the priority range may increase beyond 2 * P. The scaling introduced here helps to keep the range bounded.
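A Go sketch of the scaling and centering steps from the algorithm above (integer division throughout, as the spec notes; this is illustrative, not the implementation):

```go
// scaleAndCenter applies the two pre-election steps to the priorities a,
// given total voting power totalPower.
func scaleAndCenter(a []int64, totalPower int64) {
    // scale so that max(A) - min(A) stays below 2*P
    min, max := a[0], a[0]
    for _, v := range a {
        if v < min {
            min = v
        }
        if v > max {
            max = v
        }
    }
    if diff, threshold := max-min, 2*totalPower; diff > threshold {
        scale := diff / threshold
        for i := range a {
            a[i] /= scale
        }
    }

    // center priorities around zero
    var sum int64
    for _, v := range a {
        sum += v
    }
    avg := sum / int64(len(a))
    for i := range a {
        a[i] -= avg
    }
}
```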
## Wrinkles
### Validator Power Overflow Conditions
The validator voting power is a positive number stored as an int64. When a validator is added, the `1.125 * P` computation must not overflow. As a consequence, the code handling validator updates (add and update) checks for overflow conditions, making sure the total voting power is never larger than the largest int64 `MAX`, with the property that `1.125 * MAX` is still in the bounds of int64. A fatal error is returned when an overflow condition is detected.
### Proposer Priority Overflow/Underflow Handling
The proposer priority is stored as an int64. The selection algorithm performs additions and subtractions to these values and in the case of overflows and underflows it limits the values to:
```go
MaxInt64 = 1 << 63 - 1
MinInt64 = -1 << 63
```
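One common way to implement this clipping (a sketch in the spirit of the implementation's safe-arithmetic helpers, not a verbatim copy):

```go
import "math"

// safeAddClip adds two priorities, clipping to the int64 bounds on
// overflow/underflow instead of wrapping around.
func safeAddClip(a, b int64) int64 {
    if b > 0 && a > math.MaxInt64-b {
        return math.MaxInt64
    }
    if b < 0 && a < math.MinInt64-b {
        return math.MinInt64
    }
    return a + b
}
```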
## Requirement Fulfillment Claims
__[R1]__

The proposer algorithm is deterministic, giving consistent results across executions with the same transactions and validator set modifications.
[WIP - needs more detail]
__[R2]__
Given a set of processes with the total voting power P, during a sequence of elections of length P, the number of times any process is selected as proposer is equal to its voting power. The sequence of the P proposers then repeats. If we consider the validator set:
Validator | p1| p2
----------|---|---
VP | 1 | 3
@ -286,6 +312,8 @@ Assigning priorities to each validator based on the voting power and updating th
Intuitively, a process v jumps ahead in the queue at most (max(A) - min(A))/VP(v) times until it reaches the head and is elected. The frequency is then:
```md
f(v) ~ VP(v)/(max(A)-min(A)) = 1/k * VP(v)/P
```
For the current implementation, this means v should be proposer at least VP(v) times out of k * P runs, with scaling factor k=2.

+ 1
- 1
spec/reactors/mempool/config.md

@ -12,7 +12,7 @@ Environment: `TM_MEMPOOL_RECHECK=false`
Config:
```toml
[mempool]
recheck = false
```


+ 1
- 1
spec/reactors/mempool/functionality.md

@ -35,7 +35,7 @@ What guarantees does it need from the ABCI app?
The implementation within this library also implements a tx cache.
This is so that signatures don't have to be reverified if the tx has
already been seen before.
However, we only store valid txs in the cache, not invalid ones.
This is because invalid txs could become good later.
Txs that are included in a block aren't removed from the cache,


+ 1
- 1
spec/reactors/mempool/reactor.md

@ -7,7 +7,7 @@ See [this issue](https://github.com/tendermint/tendermint/issues/1503)
Mempool maintains a cache of the last 10000 transactions to prevent
replaying old transactions (plus transactions coming from other
validators, who are continually exchanging transactions). Read [Replay
Protection](https://github.com/tendermint/tendermint/blob/8cdaa7f515a9d366bbc9f0aff2a263a1a6392ead/docs/app-dev/app-development.md#replay-protection)
for details.
Sending incorrectly encoded data or data exceeding `maxMsgSize` will result


+ 2
- 2
spec/reactors/pex/pex.md

@ -70,13 +70,13 @@ when calculating a bucket.
When placing a peer into a new bucket:
```md
hash(key + sourcegroup + int64(hash(key + group + sourcegroup)) % bucket_per_group) % num_new_buckets
```
When placing a peer into an old bucket:
```md
hash(key + group + int64(hash(key + addr)) % buckets_per_group) % num_old_buckets
```
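Read as Go, one plausible interpretation of the old-bucket formula (the `hash` function — an assumed 64-bit string hash — the string concatenation, and the integer conversions are assumptions for illustration):

```go
// oldBucketIdx: hash(key + group + (hash(key + addr) % bucketsPerGroup)) % numOldBuckets
func oldBucketIdx(key, group, addr string, bucketsPerGroup, numOldBuckets int64) int64 {
    inner := int64(hash(key+addr)) % bucketsPerGroup
    return int64(hash(key+group+strconv.FormatInt(inner, 10))) % numOldBuckets
}
```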


+ 19
- 19
spec/reactors/state_sync/reactor.md

@ -5,14 +5,14 @@ and restoring state machine snapshots. For more information, see the [state sync
The state sync reactor has two main responsibilities:
* Serving state machine snapshots taken by the local ABCI application to new nodes joining the
network.
* Discovering existing snapshots and fetching snapshot chunks for an empty local application
being bootstrapped.
The state sync process for bootstrapping a new node is described in detail in the section linked
above. While technically part of the reactor (see `statesync/syncer.go` and related components),
this document will only cover the P2P reactor component.
For details on the ABCI methods and data types, see the [ABCI documentation](../../abci/abci.md).
@ -26,16 +26,16 @@ available snapshots:
type snapshotsRequestMessage struct{}
```
The receiver will query the local ABCI application via `ListSnapshots`, and send a message
containing snapshot metadata (limited to 4 MB) for each of the 10 most recent snapshots:
```go
type snapshotsResponseMessage struct {
    Height   uint64
    Format   uint32
    Chunks   uint32
    Hash     []byte
    Metadata []byte
}
```
@ -45,9 +45,9 @@ is accepted, the state syncer will request snapshot chunks from appropriate peer
```go
type chunkRequestMessage struct {
    Height uint64
    Format uint32
    Index  uint32
}
```
@ -56,16 +56,16 @@ and respond with it (limited to 16 MB):
```go
type chunkResponseMessage struct {
    Height  uint64
    Format  uint32
    Index   uint32
    Chunk   []byte
    Missing bool
}
```
Here, `Missing` is used to signify that the chunk was not found on the peer, since an empty
chunk is a valid (although unlikely) response.
The returned chunk is given to the ABCI application via `ApplySnapshotChunk` until the snapshot
is restored. If a chunk response is not returned within some time, it will be re-requested,
@ -73,5 +73,5 @@ possibly from a different peer.
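Illustrative handling on the syncing side (hypothetical helper names; `applyChunk` stands in for the hand-off to `ApplySnapshotChunk`): the explicit `Missing` flag is what distinguishes "peer doesn't have it" from a legitimately empty chunk.

```go
// onChunkResponse is a hypothetical sketch of consuming a chunk response.
func onChunkResponse(msg chunkResponseMessage) error {
    if msg.Missing {
        // re-request this chunk, possibly from a different peer
        return fmt.Errorf("peer is missing chunk %d of snapshot at height %d", msg.Index, msg.Height)
    }
    // hand the chunk to the ABCI application via ApplySnapshotChunk
    return applyChunk(msg.Height, msg.Format, msg.Index, msg.Chunk)
}
```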
The ABCI application is able to request peer bans and chunk refetching as part of the ABCI protocol.
If no state sync is in progress (i.e. during normal operation), any unsolicited response messages
are discarded.
