
lint markdown docs using the stop-words and write-good linters (#2195)

* lint docs with write-good, stop-words

* remove package-lock.json

* update changelog

* fix wrong paragraph formatting

* fix some docs formatting

* fix docs format

* fix abci spec format
Branch: pull/2284/head
Authored by Peng Zhong 6 years ago, committed by Anton Kaliaev
Commit: 20e35654c6
53 changed files with 3444 additions and 835 deletions
  1. CHANGELOG_PENDING.md (+1 -0)
  2. docs/.textlintrc.json (+9 -0)
  3. docs/app-dev/ecosystem.json (+10 -20)
  4. docs/architecture/adr-002-event-subscription.md (+1 -2)
  5. docs/architecture/adr-004-historical-validators.md (+3 -3)
  6. docs/architecture/adr-005-consensus-params.md (+4 -5)
  7. docs/architecture/adr-006-trust-metric.md (+14 -23)
  8. docs/architecture/adr-007-trust-metric-usage.md (+3 -0)
  9. docs/architecture/adr-008-priv-validator.md (+4 -4)
  10. docs/architecture/adr-009-ABCI-design.md (+4 -6)
  11. docs/architecture/adr-010-crypto-changes.md (+0 -3)
  12. docs/architecture/adr-011-monitoring.md (+29 -29)
  13. docs/architecture/adr-012-ABCI-propose-tx.md (+6 -6)
  14. docs/architecture/adr-012-peer-transport.md (+17 -17)
  15. docs/architecture/adr-013-symmetric-crypto.md (+23 -17)
  16. docs/architecture/adr-014-secp-malleability.md (+9 -7)
  17. docs/architecture/adr-015-crypto-encoding.md (+10 -5)
  18. docs/architecture/adr-016-protocol-versions.md (+14 -19)
  19. docs/architecture/adr-017-chain-versions.md (+11 -12)
  20. docs/architecture/adr-019-multisigs.md (+29 -19)
  21. docs/architecture/adr-template.md (+0 -1)
  22. docs/package.json (+5 -2)
  23. docs/spec/README.md (+3 -3)
  24. docs/spec/blockchain/blockchain.md (+7 -9)
  25. docs/spec/blockchain/encoding.md (+30 -33)
  26. docs/spec/blockchain/state.md (+0 -1)
  27. docs/spec/consensus/bft-time.md (+41 -44)
  28. docs/spec/consensus/consensus.md (+87 -87)
  29. docs/spec/consensus/light-client.md (+34 -35)
  30. docs/spec/p2p/connection.md (+6 -5)
  31. docs/spec/p2p/node.md (+2 -1)
  32. docs/spec/p2p/peer.md (+8 -9)
  33. docs/spec/reactors/block_sync/impl.md (+30 -30)
  34. docs/spec/reactors/block_sync/reactor.md (+145 -147)
  35. docs/spec/reactors/consensus/consensus-reactor.md (+105 -106)
  36. docs/spec/reactors/consensus/consensus.md (+1 -1)
  37. docs/spec/reactors/consensus/proposer-selection.md (+12 -12)
  38. docs/spec/reactors/mempool/concurrency.md (+4 -4)
  39. docs/spec/reactors/mempool/config.md (+1 -1)
  40. docs/spec/reactors/mempool/functionality.md (+7 -8)
  41. docs/spec/reactors/mempool/messages.md (+14 -14)
  42. docs/spec/reactors/pex/pex.md (+1 -1)
  43. docs/spec/software/abci.md (+26 -29)
  44. docs/stop-words.txt (+6 -0)
  45. docs/tendermint-core/block-structure.md (+7 -7)
  46. docs/tendermint-core/light-client-protocol.md (+10 -10)
  47. docs/tendermint-core/running-in-production.md (+13 -12)
  48. docs/tendermint-core/secure-p2p.md (+6 -6)
  49. docs/tendermint-core/using-tendermint.md (+14 -14)
  50. docs/tendermint-core/validators.md (+3 -3)
  51. docs/tools/benchmarking.md (+2 -2)
  52. docs/tools/monitoring.md (+2 -1)
  53. docs/yarn.lock (+2611 -0)

CHANGELOG_PENDING.md (+1 -0)

@ -23,6 +23,7 @@ FEATURES:
- [types] allow genesis file to have 0 validators ([#2015](https://github.com/tendermint/tendermint/issues/2015))
IMPROVEMENTS:
- [docs] Lint documentation with `write-good` and `stop-words`.
- [scripts] Added json2wal tool, which is supposed to help our users restore
corrupted WAL files and compose test WAL files (@bradyjoestar)


docs/.textlintrc.json (+9 -0)

@ -0,0 +1,9 @@
{
"rules": {
"stop-words": {
"severity": "warning",
"defaultWords": false,
"words": "stop-words.txt"
}
}
}

docs/app-dev/ecosystem.json (+10 -20)

@ -5,24 +5,21 @@
"url": "https://github.com/cosmos/cosmos-sdk",
"language": "Go",
"author": "Cosmos",
"description":
"A prototypical account based crypto currency state machine supporting plugins"
"description": "A prototypical account based crypto currency state machine supporting plugins"
},
{
"name": "cb-ledger",
"url": "https://github.com/block-finance/cpp-abci",
"language": "C++",
"author": "Block Finance",
"description":
"Custodian Bank Ledger, integrating central banking with the blockchains of tomorrow"
"description": "Custodian Bank Ledger, integrating central banking with the blockchains of tomorrow"
},
{
"name": "Clearchain",
"url": "https://github.com/tendermint/clearchain",
"language": "Go",
"author": "FXCLR",
"description":
"Application to manage a distributed ledger for money transfers that support multi-currency accounts"
"description": "Application to manage a distributed ledger for money transfers that support multi-currency accounts"
},
{
"name": "Ethermint",
@ -43,8 +40,7 @@
"url": "https://github.com/hyperledger/burrow",
"language": "Go",
"author": "Monax Industries",
"description":
"Ethereum Virtual Machine augmented with native permissioning scheme and global key-value store"
"description": "Ethereum Virtual Machine augmented with native permissioning scheme and global key-value store"
},
{
"name": "Merkle AVL Tree",
@ -72,8 +68,7 @@
"url": "https://github.com/trusch/passchain",
"language": "Go",
"author": "trusch",
"description":
"Tool to securely store and share passwords, tokens and other short secrets"
"description": "Tool to securely store and share passwords, tokens and other short secrets"
},
{
"name": "Passwerk",
@ -87,8 +82,7 @@
"url": "https://github.com/davebryson/py-tendermint",
"language": "Python",
"author": "Dave Bryson",
"description":
"A Python microframework for building blockchain applications with Tendermint"
"description": "A Python microframework for building blockchain applications with Tendermint"
},
{
"name": "Stratumn SDK",
@ -102,16 +96,14 @@
"url": "https://github.com/keppel/lotion",
"language": "Javascript",
"author": "Judd Keppel",
"description":
"A Javascript microframework for building blockchain applications with Tendermint"
"description": "A Javascript microframework for building blockchain applications with Tendermint"
},
{
"name": "Tendermint Blockchain Chat App",
"url": "https://github.com/SaifRehman/tendermint-chat-app/",
"language": "Javascript",
"author": "Saif Rehman",
"description":
"This is a minimal chat application based on Tendermint using Lotion.js in 30 lines of code!. It also includes web/mobile application built using Ionic 3."
"description": "This is a minimal chat application based on Tendermint using Lotion.js in 30 lines of code!. It also includes web/mobile application built using Ionic 3."
},
{
"name": "BigchainDB",
@ -184,16 +176,14 @@
"url": "https://github.com/tendermint/tools",
"technology": "Docker and Kubernetes",
"author": "Tendermint",
"description":
"Deploy a Tendermint test network using Google's kubernetes"
"description": "Deploy a Tendermint test network using Google's kubernetes"
},
{
"name": "terraforce",
"url": "https://github.com/tendermint/tools",
"technology": "Terraform",
"author": "Tendermint",
"description":
"Terraform + our custom terraforce tool; deploy a production Tendermint network with load balancing over multiple AWS availability zones"
"description": "Terraform + our custom terraforce tool; deploy a production Tendermint network with load balancing over multiple AWS availability zones"
},
{
"name": "ansible-tendermint",


docs/architecture/adr-002-event-subscription.md (+1 -2)

@ -7,8 +7,7 @@ a subset of transactions** (rather than all of them) using `/subscribe?event=X`.
example, I want to subscribe for all transactions associated with a particular
account. Same for fetching. The user may want to **fetch transactions based on
some filter** (rather than fetching all the blocks). For example, I want to get
all transactions for a particular account in the last two weeks (`tx's block
time >= '2017-06-05'`).
all transactions for a particular account in the last two weeks (`tx's block time >= '2017-06-05'`).
Now you can't even subscribe to "all txs" in Tendermint.


docs/architecture/adr-004-historical-validators.md (+3 -3)

@ -3,11 +3,11 @@
## Context
Right now, we can query the present validator set, but there is no history.
If you were offline for a long time, there is no way to reconstruct past validators. This is needed for the light client and we agreed needs enhancement of the API.
If you were offline for a long time, there is no way to reconstruct past validators. This is needed for the light client and we agreed needs enhancement of the API.
## Decision
For every block, store a new structure that contains either the latest validator set,
For every block, store a new structure that contains either the latest validator set,
or the height of the last block for which the validator set changed. Note this is not
the height of the block which returned the validator set change itself, but the next block,
ie. the first block it comes into effect for.
@ -19,7 +19,7 @@ are updated frequently - for instance by only saving the diffs, rather than the
An alternative approach suggested keeping the validator set, or diffs of it, in a merkle IAVL tree.
While it might afford cheaper proofs that a validator set has not changed, it would be more complex,
and likely less efficient.
and likely less efficient.
## Status


docs/architecture/adr-005-consensus-params.md (+4 -5)

@ -7,7 +7,7 @@ Since they may be need to be different in different networks, and potentially to
networks, we seek to initialize them in a genesis file, and expose them through the ABCI.
While we have some specific parameters now, like maximum block and transaction size, we expect to have more in the future,
such as a period over which evidence is valid, or the frequency of checkpoints.
such as a period over which evidence is valid, or the frequency of checkpoints.
## Decision
@ -45,7 +45,7 @@ type BlockGossip struct {
The `ConsensusParams` can evolve over time by adding new structs that cover different aspects of the consensus rules.
The `BlockPartSizeBytes` and the `BlockSize.MaxBytes` are enforced to be greater than 0.
The `BlockPartSizeBytes` and the `BlockSize.MaxBytes` are enforced to be greater than 0.
The former because we need a part size, the latter so that we always have at least some sanity check over the size of blocks.
### ABCI
@ -53,14 +53,14 @@ The former because we need a part size, the latter so that we always have at lea
#### InitChain
InitChain currently takes the initial validator set. It should be extended to also take parts of the ConsensusParams.
There is some case to be made for it to take the entire Genesis, except there may be things in the genesis,
There is some case to be made for it to take the entire Genesis, except there may be things in the genesis,
like the BlockPartSize, that the app shouldn't really know about.
#### EndBlock
The EndBlock response includes a `ConsensusParams`, which includes BlockSize and TxSize, but not BlockGossip.
Other param struct can be added to `ConsensusParams` in the future.
The `0` value is used to denote no change.
The `0` value is used to denote no change.
Any other value will update that parameter in the `State.ConsensusParams`, to be applied for the next block.
Tendermint should have hard-coded upper limits as sanity checks.
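A minimal sketch of that update rule (hypothetical helper, not the actual Tendermint code):

```go
// applyParamUpdate applies the "zero means no change" convention for
// consensus parameter updates returned from EndBlock.
func applyParamUpdate(current, update int64) int64 {
	if update == 0 {
		return current // 0 denotes "no change"
	}
	return update // any other value takes effect for the next block
}
```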
@ -83,4 +83,3 @@ Proposed.
### Neutral
- The TxSize, which checks validity, may be in conflict with the config's `max_block_size_tx`, which determines proposal sizes

docs/architecture/adr-006-trust-metric.md (+14 -23)

@ -8,13 +8,13 @@ The proposed trust metric will allow Tendermint to maintain local trust rankings
The Tendermint Core project developers would like to improve Tendermint security and reliability by keeping track of the level of trustworthiness peers have demonstrated within the peer-to-peer network. This way, undesirable outcomes from peers will not immediately result in them being dropped from the network (potentially causing drastic changes to take place). Instead, peers behavior can be monitored with appropriate metrics and be removed from the network once Tendermint Core is certain the peer is a threat. For example, when the PEXReactor makes a request for peers network addresses from a already known peer, and the returned network addresses are unreachable, this untrustworthy behavior should be tracked. Returning a few bad network addresses probably shouldn’t cause a peer to be dropped, while excessive amounts of this behavior does qualify the peer being dropped.
Trust metrics can be circumvented by malicious nodes through the use of strategic oscillation techniques, which adapts the malicious node’s behavior pattern in order to maximize its goals. For instance, if the malicious node learns that the time interval of the Tendermint trust metric is *X* hours, then it could wait *X* hours in-between malicious activities. We could try to combat this issue by increasing the interval length, yet this will make the system less adaptive to recent events.
Trust metrics can be circumvented by malicious nodes through the use of strategic oscillation techniques, which adapts the malicious node’s behavior pattern in order to maximize its goals. For instance, if the malicious node learns that the time interval of the Tendermint trust metric is _X_ hours, then it could wait _X_ hours in-between malicious activities. We could try to combat this issue by increasing the interval length, yet this will make the system less adaptive to recent events.
Instead, having shorter intervals, but keeping a history of interval values, will give our metric the flexibility needed in order to keep the network stable, while also making it resilient against a strategic malicious node in the Tendermint peer-to-peer network. Also, the metric can access trust data over a rather long period of time while not greatly increasing its history size by aggregating older history values over a larger number of intervals, and at the same time, maintain great precision for the recent intervals. This approach is referred to as fading memories, and closely resembles the way human beings remember their experiences. The trade-off to using history data is that the interval values should be preserved in-between executions of the node.
### References
S. Mudhakar, L. Xiong, and L. Liu, “TrustGuard: Countering Vulnerabilities in Reputation Management for Decentralized Overlay Networks,” in *Proceedings of the 14th international conference on World Wide Web, pp. 422-431*, May 2005.
S. Mudhakar, L. Xiong, and L. Liu, “TrustGuard: Countering Vulnerabilities in Reputation Management for Decentralized Overlay Networks,” in _Proceedings of the 14th international conference on World Wide Web, pp. 422-431_, May 2005.
## Decision
@ -26,25 +26,23 @@ The three subsections below will cover the process being considered for calculat
The proposed trust metric will count good and bad events relevant to the object, and calculate the percent of counters that are good over an interval with a predefined duration. This is the procedure that will continue for the life of the trust metric. When the trust metric is queried for the current **trust value**, a resilient equation will be utilized to perform the calculation.
The equation being proposed resembles a Proportional-Integral-Derivative (PID) controller used in control systems. The proportional component allows us to be sensitive to the value of the most recent interval, while the integral component allows us to incorporate trust values stored in the history data, and the derivative component allows us to give weight to sudden changes in the behavior of a peer. We compute the trust value of a peer in interval i based on its current trust ranking, its trust rating history prior to interval *i* (over the past *maxH* number of intervals) and its trust ranking fluctuation. We will break up the equation into the three components.
The equation being proposed resembles a Proportional-Integral-Derivative (PID) controller used in control systems. The proportional component allows us to be sensitive to the value of the most recent interval, while the integral component allows us to incorporate trust values stored in the history data, and the derivative component allows us to give weight to sudden changes in the behavior of a peer. We compute the trust value of a peer in interval i based on its current trust ranking, its trust rating history prior to interval _i_ (over the past _maxH_ number of intervals) and its trust ranking fluctuation. We will break up the equation into the three components.
```math
(1) Proportional Value = a * R[i]
```
where *R*[*i*] denotes the raw trust value at time interval *i* (where *i* == 0 being current time) and *a* is the weight applied to the contribution of the current reports. The next component of our equation uses a weighted sum over the last *maxH* intervals to calculate the history value for time *i*:
where _R_[*i*] denotes the raw trust value at time interval _i_ (where _i_ == 0 being current time) and _a_ is the weight applied to the contribution of the current reports. The next component of our equation uses a weighted sum over the last _maxH_ intervals to calculate the history value for time _i_:
`H[i] = ` ![formula1](img/formula1.png "Weighted Sum Formula")
`H[i] =` ![formula1](img/formula1.png "Weighted Sum Formula")
The weights can be chosen either optimistically or pessimistically. An optimistic weight creates larger weights for newer history data values, while the the pessimistic weight creates larger weights for time intervals with lower scores. The default weights used during the calculation of the history value are optimistic and calculated as *Wk* = 0.8^*k*, for time interval *k*. With the history value available, we can now finish calculating the integral value:
The weights can be chosen either optimistically or pessimistically. An optimistic weight creates larger weights for newer history data values, while the the pessimistic weight creates larger weights for time intervals with lower scores. The default weights used during the calculation of the history value are optimistic and calculated as _Wk_ = 0.8^_k_, for time interval _k_. With the history value available, we can now finish calculating the integral value:
```math
(2) Integral Value = b * H[i]
```
Where *H*[*i*] denotes the history value at time interval *i* and *b* is the weight applied to the contribution of past performance for the object being measured. The derivative component will be calculated as follows:
Where _H_[*i*] denotes the history value at time interval _i_ and _b_ is the weight applied to the contribution of past performance for the object being measured. The derivative component will be calculated as follows:
```math
D[i] = R[i] – H[i]
@ -52,25 +50,25 @@ D[i] = R[i] – H[i]
(3) Derivative Value = c(D[i]) * D[i]
```
Where the value of *c* is selected based on the *D*[*i*] value relative to zero. The default selection process makes *c* equal to 0 unless *D*[*i*] is a negative value, in which case c is equal to 1. The result is that the maximum penalty is applied when current behavior is lower than previously experienced behavior. If the current behavior is better than the previously experienced behavior, then the Derivative Value has no impact on the trust value. With the three components brought together, our trust value equation is calculated as follows:
Where the value of _c_ is selected based on the _D_[*i*] value relative to zero. The default selection process makes _c_ equal to 0 unless _D_[*i*] is a negative value, in which case c is equal to 1. The result is that the maximum penalty is applied when current behavior is lower than previously experienced behavior. If the current behavior is better than the previously experienced behavior, then the Derivative Value has no impact on the trust value. With the three components brought together, our trust value equation is calculated as follows:
```math
TrustValue[i] = a * R[i] + b * H[i] + c(D[i]) * D[i]
```
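Read together, the three components combine as in the following minimal sketch (hypothetical helper names; not the actual trust metric implementation):

```go
// trustValue combines the proportional, integral, and derivative components:
// TrustValue[i] = a*R[i] + b*H[i] + c(D[i])*D[i].
func trustValue(a, b, r, h float64) float64 {
	d := r - h // D[i] = R[i] - H[i]
	c := 0.0
	if d < 0 {
		c = 1.0 // maximum penalty only when current behavior is worse than history
	}
	return a*r + b*h + c*d
}
```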
As a performance optimization that will keep the amount of raw interval data being saved to a reasonable size of *m*, while allowing us to represent 2^*m* - 1 history intervals, we can employ the fading memories technique that will trade space and time complexity for the precision of the history data values by summarizing larger quantities of less recent values. While our equation above attempts to access up to *maxH* (which can be 2^*m* - 1), we will map those requests down to *m* values using equation 4 below:
As a performance optimization that will keep the amount of raw interval data being saved to a reasonable size of _m_, while allowing us to represent 2^_m_ - 1 history intervals, we can employ the fading memories technique that will trade space and time complexity for the precision of the history data values by summarizing larger quantities of less recent values. While our equation above attempts to access up to _maxH_ (which can be 2^_m_ - 1), we will map those requests down to _m_ values using equation 4 below:
```math
(4) j = index, where index > 0
```
Where *j* is one of *(0, 1, 2, … , m – 1)* indices used to access history interval data. Now we can access the raw intervals using the following calculations:
Where _j_ is one of _(0, 1, 2, … , m – 1)_ indices used to access history interval data. Now we can access the raw intervals using the following calculations:
```math
R[0] = raw data for current time interval
```
`R[j] = ` ![formula2](img/formula2.png "Fading Memories Formula")
`R[j] =` ![formula2](img/formula2.png "Fading Memories Formula")
### Trust Metric Store
@ -84,9 +82,7 @@ When the node is shutting down, the trust metric store will save history data fo
Each trust metric allows for the recording of positive/negative events, querying the current trust value/score, and the stopping/pausing of tracking over time intervals. This can be seen below:
```go
// TrustMetric - keeps track of peer reliability
type TrustMetric struct {
// Private elements.
@ -123,13 +119,11 @@ tm.BadEvents(1)
score := tm.TrustScore()
tm.Stop()
```
Some of the trust metric parameters can be configured. The weight values should probably be left alone in more cases, yet the time durations for the tracking window and individual time interval should be considered.
```go
// TrustMetricConfig - Configures the weight functions and time intervals for the metric
type TrustMetricConfig struct {
// Determines the percentage given to current behavior
@ -165,23 +159,21 @@ config := TrustMetricConfig{
tm := NewMetricWithConfig(config)
tm.BadEvents(10)
tm.Pause()
tm.Pause()
tm.GoodEvents(1) // becomes active again
```
A trust metric store should be created with a DB that has persistent storage so it can save history data across node executions. All trust metrics instantiated by the store will be created with the provided TrustMetricConfig configuration.
A trust metric store should be created with a DB that has persistent storage so it can save history data across node executions. All trust metrics instantiated by the store will be created with the provided TrustMetricConfig configuration.
When you attempt to fetch the trust metric for a peer, and an entry does not exist in the trust metric store, a new metric is automatically created and the entry made within the store.
In additional to the fetching method, GetPeerTrustMetric, the trust metric store provides a method to call when a peer has disconnected from the node. This is so the metric can be paused (history data will not be saved) for periods of time when the node is not having direct experiences with the peer.
```go
// TrustMetricStore - Manages all trust metrics for peers
type TrustMetricStore struct {
cmn.BaseService
// Private elements
}
@ -214,7 +206,6 @@ tm := tms.GetPeerTrustMetric(key)
tm.BadEvents(1)
tms.PeerDisconnected(key)
```
## Status


docs/architecture/adr-007-trust-metric-usage.md (+3 -0)

@ -17,11 +17,13 @@ For example, when the PEXReactor makes a request for peers network addresses fro
The trust metric implementation allows a developer to obtain a peer's trust metric from a trust metric store, and track good and bad events relevant to a peer's behavior, and at any time, the peer's metric can be queried for a current trust value. The current trust value is calculated with a formula that utilizes current behavior, previous behavior, and change between the two. Current behavior is calculated as the percentage of good behavior within a time interval. The time interval is short; probably set between 30 seconds and 5 minutes. On the other hand, the historic data can estimate a peer's behavior over days worth of tracking. At the end of a time interval, the current behavior becomes part of the historic data, and a new time interval begins with the good and bad counters reset to zero.
These are some important things to keep in mind regarding how the trust metrics handle time intervals and scoring:
- Each new time interval begins with a perfect score
- Bad events quickly bring the score down and good events cause the score to slowly rise
- When the time interval is over, the percentage of good events becomes historic data.
Some useful information about the inner workings of the trust metric:
- When a trust metric is first instantiated, a timer (ticker) periodically fires in order to handle transitions between trust metric time intervals
- If a peer is disconnected from a node, the timer should be paused, since the node is no longer connected to that peer
- The ability to pause the metric is supported with the store **PeerDisconnected** method and the metric **Pause** method
@ -76,6 +78,7 @@ Peer quality is tracked in the connection and across the reactors by storing the
thread safe Data store.
Peer behaviour is then defined as one of the following:
- Fatal - something outright malicious that causes us to disconnect the peer and ban it from the address book for some amount of time
- Bad - Any kind of timeout, messages that don't unmarshal, fail other validity checks, or messages we didn't ask for or aren't expecting (usually worth one bad event)
- Neutral - Unknown channels/message types/version upgrades (no good or bad events recorded)


docs/architecture/adr-008-priv-validator.md (+4 -4)

@ -11,18 +11,18 @@ implementations:
The SocketPV address can be provided via flags at the command line - doing so
will cause Tendermint to ignore any "priv_validator.json" file and to listen on
the given address for incoming connections from an external priv_validator
process. It will halt any operation until at least one external process
process. It will halt any operation until at least one external process
succesfully connected.
The external priv_validator process will dial the address to connect to
Tendermint, and then Tendermint will send requests on the ensuing connection to
sign votes and proposals. Thus the external process initiates the connection,
but the Tendermint process makes all requests. In a later stage we're going to
sign votes and proposals. Thus the external process initiates the connection,
but the Tendermint process makes all requests. In a later stage we're going to
support multiple validators for fault tolerance. To prevent double signing they
need to be synced, which is deferred to an external solution (see #1185).
In addition, Tendermint will provide implementations that can be run in that
external process. These include:
external process. These include:
- FilePV will encrypt the private key, and the user must enter password to
decrypt key when process is started.


docs/architecture/adr-009-ABCI-design.md (+4 -6)

@ -8,7 +8,7 @@
## Context
The ABCI was first introduced in late 2015. It's purpose is to be:
The ABCI was first introduced in late 2015. It's purpose is to be:
- a generic interface between state machines and their replication engines
- agnostic to the language the state machine is written in
@ -66,8 +66,8 @@ possible.
### Validators
To change the validator set, applications can return a list of validator updates
with ResponseEndBlock. In these updates, the public key *must* be included,
because Tendermint requires the public key to verify validator signatures. This
with ResponseEndBlock. In these updates, the public key _must_ be included,
because Tendermint requires the public key to verify validator signatures. This
means ABCI developers have to work with PubKeys. That said, it would also be
convenient to work with address information, and for it to be simple to do so.
@ -80,7 +80,7 @@ in commits.
### InitChain
Tendermint passes in a list of validators here, and nothing else. It would
Tendermint passes in a list of validators here, and nothing else. It would
benefit the application to be able to control the initial validator set. For
instance the genesis file could include application-based information about the
initial validator set that the application could process to determine the
@ -120,7 +120,6 @@ v1 will:
That said, an Amino v2 will be worked on to improve the performance of the
format and its useability in cryptographic applications.
### PubKey
Encoding schemes infect software. As a generic middleware, ABCI aims to have
@ -143,7 +142,6 @@ where `type` can be:
- "ed225519", with `data = <raw 32-byte pubkey>`
- "secp256k1", with `data = <33-byte OpenSSL compressed pubkey>`
As we want to retain flexibility here, and since ideally, PubKey would be an
interface type, we do not use `enum` or `oneof`.
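A hypothetical Go rendering of that generic PubKey message (an illustration only; the actual ABCI types are defined in protobuf):

```go
// PubKey carries a free-form type string plus raw bytes, rather than an
// enum or oneof, so new key types can be added without schema changes.
type PubKey struct {
	Type string // e.g. "ed25519" or "secp256k1"
	Data []byte // raw 32-byte ed25519 key, or 33-byte compressed secp256k1 key
}
```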


docs/architecture/adr-010-crypto-changes.md (+0 -3)

@ -66,13 +66,10 @@ Make the following changes:
- More modern and standard cryptographic functions with wider adoption and hardware acceleration
### Negative
- Exact authenticated encryption construction isn't already provided in a well-used library
### Neutral
## References

docs/architecture/adr-011-monitoring.md (+29 -29)

@ -15,11 +15,11 @@ https://github.com/tendermint/tendermint/issues/986.
A few solutions were considered:
1. [Prometheus](https://prometheus.io)
a) Prometheus API
b) [go-kit metrics package](https://github.com/go-kit/kit/tree/master/metrics) as an interface plus Prometheus
c) [telegraf](https://github.com/influxdata/telegraf)
d) new service, which will listen to events emitted by pubsub and report metrics
5. [OpenCensus](https://opencensus.io/go/index.html)
a) Prometheus API
b) [go-kit metrics package](https://github.com/go-kit/kit/tree/master/metrics) as an interface plus Prometheus
c) [telegraf](https://github.com/influxdata/telegraf)
d) new service, which will listen to events emitted by pubsub and report metrics
2. [OpenCensus](https://opencensus.io/go/index.html)
### 1. Prometheus
@ -70,30 +70,30 @@ will need to write interfaces ourselves.
### List of metrics
| | Name | Type | Description |
| - | --------------------------------------- | ------- | ----------------------------------------------------------------------------- |
| A | consensus_height | Gauge | |
| A | consensus_validators | Gauge | Number of validators who signed |
| A | consensus_validators_power | Gauge | Total voting power of all validators |
| A | consensus_missing_validators | Gauge | Number of validators who did not sign |
| A | consensus_missing_validators_power | Gauge | Total voting power of the missing validators |
| A | consensus_byzantine_validators | Gauge | Number of validators who tried to double sign |
| A | consensus_byzantine_validators_power | Gauge | Total voting power of the byzantine validators |
| A | consensus_block_interval | Timing | Time between this and last block (Block.Header.Time) |
| | consensus_block_time | Timing | Time to create a block (from creating a proposal to commit) |
| | consensus_time_between_blocks | Timing | Time between committing last block and (receiving proposal creating proposal) |
| A | consensus_rounds | Gauge | Number of rounds |
| | consensus_prevotes | Gauge | |
| | consensus_precommits | Gauge | |
| | consensus_prevotes_total_power | Gauge | |
| | consensus_precommits_total_power | Gauge | |
| A | consensus_num_txs | Gauge | |
| A | mempool_size | Gauge | |
| A | consensus_total_txs | Gauge | |
| A | consensus_block_size | Gauge | In bytes |
| A | p2p_peers | Gauge | Number of peers node's connected to |
`A` - will be implemented in the fist place.
| | Name | Type | Description |
| --- | ------------------------------------ | ------ | ----------------------------------------------------------------------------- |
| A | consensus_height | Gauge | |
| A | consensus_validators | Gauge | Number of validators who signed |
| A | consensus_validators_power | Gauge | Total voting power of all validators |
| A | consensus_missing_validators | Gauge | Number of validators who did not sign |
| A | consensus_missing_validators_power | Gauge | Total voting power of the missing validators |
| A | consensus_byzantine_validators | Gauge | Number of validators who tried to double sign |
| A | consensus_byzantine_validators_power | Gauge | Total voting power of the byzantine validators |
| A | consensus_block_interval | Timing | Time between this and last block (Block.Header.Time) |
| | consensus_block_time | Timing | Time to create a block (from creating a proposal to commit) |
| | consensus_time_between_blocks | Timing | Time between committing last block and (receiving proposal creating proposal) |
| A | consensus_rounds | Gauge | Number of rounds |
| | consensus_prevotes | Gauge | |
| | consensus_precommits | Gauge | |
| | consensus_prevotes_total_power | Gauge | |
| | consensus_precommits_total_power | Gauge | |
| A | consensus_num_txs | Gauge | |
| A | mempool_size | Gauge | |
| A | consensus_total_txs | Gauge | |
| A | consensus_block_size | Gauge | In bytes |
| A | p2p_peers | Gauge | Number of peers node's connected to |
`A` - will be implemented in the fist place.
**Proposed solution**


docs/architecture/adr-012-ABCI-propose-tx.md (+6 -6)

@ -33,7 +33,7 @@ Due to the requirements of [Minimal Viable Plasma (MVP)](https://ethresear.ch/t/
special treatment.
2. Other "internal" transactions on the child chain, which may be initiated
unilaterally. The most basic example of is a coinbase transaction
unilaterally. The most basic example of is a coinbase transaction
implementing validator node incentives, but may also be app-specific. In
these cases, it may be favourable for such transactions to
be ordered in a specific manner, e.g., coinbase transactions will always be
@ -86,14 +86,14 @@ current proposer is passed to `BeginBlock`.
It is much easier to relay these transactions directly to the Root
Chain smart contract and/or maintain a "compressed" auxiliary chain comprised
of Plasma-friendly blocks that 100% reflect the canonical (Tendermint)
blockchain. Unfortunately, this approach not idiomatic (i.e., utilises the
blockchain. Unfortunately, this approach not idiomatic (i.e., utilises the
Tendermint consensus engine in unintended ways). Additionally, it does not
allow the application developer to:
- Control the _ordering_ of transactions in the proposed block (e.g., index 0,
or 0 to `n` for coinbase transactions)
or 0 to `n` for coinbase transactions)
- Control the _number_ of transactions in the block (e.g., when a `deposit`
block is required)
block is required)
Since determinism is of utmost importance in blockchain engineering, this approach,
while more viable, should also not be considered as fit for production.
@ -163,9 +163,9 @@ Pending
- Tendermint ABCI apps will be able to function as minimally viable Plasma chains.
- It will thereby become possible to add an extension to `cosmos-sdk` to enable
ABCI apps to support both IBC and Plasma, maximising interop.
ABCI apps to support both IBC and Plasma, maximising interop.
- ABCI apps will have great control and flexibility in managing blockchain state,
without having to resort to non-deterministic hacks and/or unsafe workarounds
without having to resort to non-deterministic hacks and/or unsafe workarounds
### Negative


docs/architecture/adr-012-peer-transport.md (+17 -17)

@ -9,8 +9,9 @@ handling. An artifact is the dependency of the Switch on
`[config.P2PConfig`](https://github.com/tendermint/tendermint/blob/05a76fb517f50da27b4bfcdc7b4cf185fc61eff6/config/config.go#L272-L339).
Addresses:
* [#2046](https://github.com/tendermint/tendermint/issues/2046)
* [#2047](https://github.com/tendermint/tendermint/issues/2047)
- [#2046](https://github.com/tendermint/tendermint/issues/2046)
- [#2047](https://github.com/tendermint/tendermint/issues/2047)
First iteraton in [#2067](https://github.com/tendermint/tendermint/issues/2067)
@ -29,15 +30,14 @@ transport implementation is responsible to filter establishing peers specific
to its domain, for the default multiplexed implementation the following will
apply:
* connections from our own node
* handshake fails
* upgrade to secret connection fails
* prevent duplicate ip
* prevent duplicate id
* nodeinfo incompatibility
- connections from our own node
- handshake fails
- upgrade to secret connection fails
- prevent duplicate ip
- prevent duplicate id
- nodeinfo incompatibility
``` go
```go
// PeerTransport proxies incoming and outgoing peer connections.
type PeerTransport interface {
// Accept returns a newly connected Peer.
@ -75,7 +75,7 @@ func NewMTransport(
nodeAddr NetAddress,
nodeInfo NodeInfo,
nodeKey NodeKey,
) *multiplexTransport
) *multiplexTransport
```
### Switch
@ -84,7 +84,7 @@ From now the Switch will depend on a fully setup `PeerTransport` to
retrieve/reach out to its peers. As the more low-level concerns are pushed to
the transport, we can omit passing the `config.P2PConfig` to the Switch.
``` go
```go
func NewSwitch(transport PeerTransport, opts ...SwitchOption) *Switch
```
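As a usage sketch of the proposed API (hypothetical wiring only; it assumes a prepared `nodeAddr`, `nodeInfo`, and `nodeKey`), the transport is built first and then handed to the Switch:

```go
// The fully set up transport is constructed before the Switch, so the
// Switch no longer needs config.P2PConfig.
transport := NewMTransport(nodeAddr, nodeInfo, nodeKey)
sw := NewSwitch(transport)
```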
@ -96,17 +96,17 @@ In Review.
### Positive
* free Switch from transport concerns - simpler implementation
* pluggable transport implementation - simpler test setup
* remove Switch dependency on P2PConfig - easier to test
- free Switch from transport concerns - simpler implementation
- pluggable transport implementation - simpler test setup
- remove Switch dependency on P2PConfig - easier to test
### Negative
* more setup for tests which depend on Switches
- more setup for tests which depend on Switches
### Neutral
* multiplexed will be the default implementation
- multiplexed will be the default implementation
[0] These guards could be potentially extended to be pluggable much like
middlewares to express different concerns required by differentally configured


docs/architecture/adr-013-symmetric-crypto.md (+23 -17)

@ -14,22 +14,23 @@ to easily swap these out.
### How do we encrypt with AEAD's
AEAD's typically require a nonce in addition to the key.
AEAD's typically require a nonce in addition to the key.
For the purposes we require symmetric cryptography for,
we need encryption to be stateless.
Because of this we use random nonces.
Because of this we use random nonces.
(Thus the AEAD must support random nonces)
We currently construct a random nonce, and encrypt the data with it.
We currently construct a random nonce, and encrypt the data with it.
The returned value is `nonce || encrypted data`.
The limitation of this is that does not provide a way to identify
which algorithm was used in encryption.
Consequently decryption with multiple algoritms is sub-optimal.
Consequently decryption with multiple algoritms is sub-optimal.
(You have to try them all)
## Decision
We should create the following two methods in a new `crypto/encoding/symmetric` package:
We should create the following two methods in a new `crypto/encoding/symmetric` package:
```golang
func Encrypt(aead cipher.AEAD, plaintext []byte) (ciphertext []byte, err error)
func Decrypt(key []byte, ciphertext []byte) (plaintext []byte, err error)
@ -37,18 +38,19 @@ func Register(aead cipher.AEAD, algo_name string, NewAead func(key []byte) (ciph
```
This allows you to specify the algorithm in encryption, but not have to specify
it in decryption.
it in decryption.
This is intended for ease of use in downstream applications, in addition to people
looking at the file directly.
One downside is that for the encrypt function you must have already initialized an AEAD,
but I don't really see this as an issue.
but I don't really see this as an issue.
If there is no error in encryption, Encrypt will return `algo_name || nonce || aead_ciphertext`.
If there is no error in encryption, Encrypt will return `algo_name || nonce || aead_ciphertext`.
`algo_name` should be length prefixed, using standard varuint encoding.
This will be binary data, but thats not a problem considering the nonce and ciphertext are also binary.
This solution requires a mapping from aead type to name.
We can achieve this via reflection.
This solution requires a mapping from aead type to name.
We can achieve this via reflection.
```golang
func getType(myvar interface{}) string {
if t := reflect.TypeOf(myvar); t.Kind() == reflect.Ptr {
@ -58,7 +60,8 @@ func getType(myvar interface{}) string {
}
}
```
Then we maintain a map from the name returned from `getType(aead)` to `algo_name`.
Then we maintain a map from the name returned from `getType(aead)` to `algo_name`.
In decryption, we read the `algo_name`, and then instantiate a new AEAD with the key.
Then we call the AEAD's decrypt method on the provided nonce/ciphertext.
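A minimal sketch of the proposed ciphertext layout, assuming the standard library's `encoding/binary` for the varuint length prefix (hypothetical helper, not the proposed package's API):

```go
// encodeSymmetric assembles: varuint(len(algo_name)) || algo_name || nonce || aead_ciphertext.
func encodeSymmetric(algoName string, nonce, sealed []byte) []byte {
	var lenBuf [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(lenBuf[:], uint64(len(algoName)))
	out := append([]byte{}, lenBuf[:n]...)
	out = append(out, algoName...)
	out = append(out, nonce...)
	return append(out, sealed...)
}
```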
@ -81,13 +84,16 @@ Proposed.
## Consequences
### Positive
* Allows us to support new AEAD's, in a way that makes decryption easier
* Allows downstream users to add their own AEAD
- Allows us to support new AEAD's, in a way that makes decryption easier
- Allows downstream users to add their own AEAD
### Negative
* We will have to break all private keys stored on disk.
They can be recovered using seed words, and upgrade scripts are simple.
- We will have to break all private keys stored on disk.
They can be recovered using seed words, and upgrade scripts are simple.
### Neutral
* Caller has to instantiate the AEAD with the private key.
However it forces them to be aware of what signing algorithm they are using, which is a positive.
- Caller has to instantiate the AEAD with the private key.
However it forces them to be aware of what signing algorithm they are using, which is a positive.

docs/architecture/adr-014-secp-malleability.md (+9 -7)

@ -22,21 +22,21 @@ Removing this second layer of signature malleability concerns could ease downstr
### ECDSA context
Secp256k1 is ECDSA over a particular curve.
The signature is of the form `(r, s)`, where `s` is a field element.
The signature is of the form `(r, s)`, where `s` is a field element.
(The particular field is the `Z_n`, where the elliptic curve has order `n`)
However `(r, -s)` is also another valid solution.
Note that anyone can negate a group element, and therefore can get this second signature.
## Decision
We can just distinguish a canonical form for the ECDSA signatures.
We can just distinguish a canonical form for the ECDSA signatures.
Then we require that all ECDSA signatures be in the form which we defined as canonical.
We reject signatures in non-canonical form.
A canonical form is rather easy to define and check.
A canonical form is rather easy to define and check.
It would just be the smaller of the two values for `s`, defined lexicographically.
This is a simple check, instead of checking if `s < n`, instead check `s <= (n - 1)/2`.
An example of another cryptosystem using this
An example of another cryptosystem using this
is the parity definition here https://github.com/zkcrypto/pairing/pull/30#issuecomment-372910663.
This is the same solution Ethereum has chosen for solving secp malleability.
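A minimal sketch of that canonical-form check, assuming `math/big` and a hypothetical helper name:

```go
// isLowS reports whether a secp256k1 signature's s value is in the
// canonical ("low-s") form, i.e. s <= (n - 1) / 2 for curve order n.
func isLowS(s, n *big.Int) bool {
	half := new(big.Int).Sub(n, big.NewInt(1))
	half.Rsh(half, 1) // (n - 1) / 2
	return s.Cmp(half) <= 0
}
```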
@ -52,10 +52,12 @@ Proposed.
## Consequences
### Positive
* Lets us maintain the ability to expect a tx hash to appear in the blockchain.
- Lets us maintain the ability to expect a tx hash to appear in the blockchain.
### Negative
* More work in all future implementations (Though this is a very simple check)
* Requires us to maintain another fork
- More work in all future implementations (Though this is a very simple check)
- Requires us to maintain another fork
### Neutral

docs/architecture/adr-015-crypto-encoding.md (+10 -5)

@ -1,8 +1,8 @@
# ADR 015: Crypto encoding
# ADR 015: Crypto encoding
## Context
We must standardize our method for encoding public keys and signatures on chain.
We must standardize our method for encoding public keys and signatures on chain.
Currently we amino encode the public keys and signatures.
The reason we are using amino here is primarily due to ease of support in
parsing for other languages.
@ -54,9 +54,11 @@ When placed in state, signatures will still be amino encoded, but it will be the
primitive type `[]byte` getting encoded.
#### Ed25519
Use the canonical representation for signatures.
#### Secp256k1
There isn't a clear canonical representation here.
Signatures have two elements `r,s`.
These bytes are encoded as `r || s`, where `r` and `s` are both exactly
@ -71,10 +73,13 @@ Needs decision on Enum types.
## Consequences
### Positive
* More space efficient signatures
- More space efficient signatures
### Negative
* We have an amino dependency for cryptography.
- We have an amino dependency for cryptography.
### Neutral
* No change to public keys
- No change to public keys

docs/architecture/adr-016-protocol-versions.md (+14 -19)

@ -8,14 +8,14 @@
## Changelog
- 03-08-2018: Updates from discussion with Jae:
- ProtocolVersion contains Block/AppVersion, not Current/Next
- signal upgrades to Tendermint using EndBlock fields
- dont restrict peer compatibilty by version to simplify syncing old nodes
- ProtocolVersion contains Block/AppVersion, not Current/Next
- signal upgrades to Tendermint using EndBlock fields
- dont restrict peer compatibilty by version to simplify syncing old nodes
- 28-07-2018: Updates from review
- split into two ADRs - one for protocol, one for chains
- include signalling for upgrades in header
- split into two ADRs - one for protocol, one for chains
- include signalling for upgrades in header
- 16-07-2018: Initial draft - was originally joint ADR for protocol and chain
versions
versions
## Context
@ -59,18 +59,16 @@ to connect to peers with older version.
### BlockVersion
- All tendermint hashed data-structures (headers, votes, txs, responses, etc.).
- Note the semantic meaning of a transaction may change according to the AppVersion,
but the way txs are merklized into the header is part of the BlockVersion
- Note the semantic meaning of a transaction may change according to the AppVersion, but the way txs are merklized into the header is part of the BlockVersion
- It should be the least frequent/likely to change.
- Tendermint should be stabilizing - it's just Atomic Broadcast.
- We can start considering for Tendermint v2.0 in a year
- Tendermint should be stabilizing - it's just Atomic Broadcast.
- We can start considering for Tendermint v2.0 in a year
- It's easy to determine the version of a block from its serialized form
### P2PVersion
- All p2p and reactor messaging (messages, detectable behaviour)
- Will change gradually as reactors evolve to improve performance and support new features
- eg proposed new message types BatchTx in the mempool and HasBlockPart in the consensus
- Will change gradually as reactors evolve to improve performance and support new features - eg proposed new message types BatchTx in the mempool and HasBlockPart in the consensus
- It's easy to determine the version of a peer from its first serialized message/s
- New versions must be compatible with at least one old version to allow gradual upgrades
@ -79,10 +77,10 @@ to connect to peers with older version.
- The ABCI state machine (txs, begin/endblock behaviour, commit hashing)
- Behaviour and message types will change abruptly in the course of the life of a chain
- Need to minimize complexity of the code for supporting different AppVersions at different heights
- Ideally, each version of the software supports only a *single* AppVersion at one time
- this means we checkout different versions of the software at different heights instead of littering the code
with conditionals
- minimize the number of data migrations required across AppVersion (ie. most AppVersion should be able to read the same state from disk as previous AppVersion).
- Ideally, each version of the software supports only a _single_ AppVersion at one time
- this means we checkout different versions of the software at different heights instead of littering the code
with conditionals
- minimize the number of data migrations required across AppVersion (ie. most AppVersion should be able to read the same state from disk as previous AppVersion).
## Ideal
@ -125,7 +123,6 @@ serve as a complete description of the consensus-critical protocol.
Using the `NextVersion` field, proposer's can signal their readiness to upgrade
to a new Block and/or App version.
### NodeInfo
NodeInfo should include a Version struct as its first field like:
@ -150,7 +147,6 @@ it's SemVer version - this is for convenience only. Eg.
The other versions and ChainID will determine peer compatibility (described below).
### ABCI
Since the ABCI is responsible for keeping Tendermint and the App in sync, we
@ -280,7 +276,6 @@ checking out and installing new software versions and restarting the process. It
would subscribe to the relevant upgrade event (needs to be implemented) and call `/unsafe_stop` at
the correct height (of course only after getting approval from its user!)
## Consequences
### Positive


docs/architecture/adr-017-chain-versions.md (+11 -12)

@ -7,9 +7,9 @@
## Changelog
- 28-07-2018: Updates from review
- split into two ADRs - one for protocol, one for chains
- split into two ADRs - one for protocol, one for chains
- 16-07-2018: Initial draft - was originally joint ADR for protocol and chain
versions
versions
## Context
@ -41,20 +41,19 @@ Peers only connect to other peers with the same NetworkName.
We need to support existing networks upgrading and forking, wherein they may do any of:
- revert back to some height, continue with the same versions but new blocks
- arbitrarily mutate state at some height, continue with the same versions (eg. Dao Fork)
- change the AppVersion at some height
- revert back to some height, continue with the same versions but new blocks
- arbitrarily mutate state at some height, continue with the same versions (eg. Dao Fork)
- change the AppVersion at some height
Note because of Tendermint's voting power threshold rules, a chain can only be extended under the "original" rules and under the new rules
if 1/3 or more is double signing, which is expressly prohibited, and is supposed to result in their punishment on both chains. Since they can censor
the punishment, the chain is expected to be hardforked to remove the validators. Thus, if both branches are to continue after a fork,
they will each require a new identifier, and the old chain identifier will be retired (ie. only useful for syncing history, not for new blocks)..
TODO: explain how to handle slashing when chain id changed!
TODO: explain how to handle slashing when chain id changed!
We need a consistent way to describe forks.
## Proposal
### ChainDescription
@ -92,9 +91,9 @@ ChainDescription = <ChainID>/x/<Height>/<ForkDescription>
```
Where
- ChainID is the ChainID from the previous ChainDescription (ie. its hash)
- `x` denotes that a change occured
- `Height` is the height the change occured
- ForkDescription has the same form as ChainDescription but for the fork
- this allows forks to specify new versions for tendermint or the app, as well as arbitrary changes to the state or validator set
- ChainID is the ChainID from the previous ChainDescription (ie. its hash)
- `x` denotes that a change occured
- `Height` is the height the change occured
- ForkDescription has the same form as ChainDescription but for the fork
- this allows forks to specify new versions for tendermint or the app, as well as arbitrary changes to the state or validator set
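A minimal sketch of composing such a description (hypothetical helper; assumes `fmt` from the standard library):

```go
// forkChainDescription builds <ChainID>/x/<Height>/<ForkDescription>,
// where prevChainID is the hash of the previous ChainDescription.
func forkChainDescription(prevChainID string, height int64, forkDesc string) string {
	return fmt.Sprintf("%s/x/%d/%s", prevChainID, height, forkDesc)
}
```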

docs/architecture/adr-019-multisigs.md (+29 -19)

@ -1,6 +1,6 @@
# ADR 019: Encoding standard for Multisignatures
## Changelog
## Changelog
06-08-2018: Minor updates
@ -10,9 +10,9 @@
## Context
Multisignatures, or technically _Accountable Subgroup Multisignatures_ (ASM),
are signature schemes which enable any subgroup of a set of signers to sign any message,
and reveal to the verifier exactly who the signers were.
Multisignatures, or technically _Accountable Subgroup Multisignatures_ (ASM),
are signature schemes which enable any subgroup of a set of signers to sign any message,
and reveal to the verifier exactly who the signers were.
This allows for complex conditionals of when to validate a signature.
Suppose the set of signers is of size _n_.
@ -22,7 +22,7 @@ this becomes what is commonly reffered to as a _k of n multisig_ in Bitcoin.
This ADR specifies the encoding standard for general accountable subgroup multisignatures,
k of n accountable subgroup multisignatures, and its weighted variant.
In the future, we can also allow for more complex conditionals on the accountable subgroup.
In the future, we can also allow for more complex conditionals on the accountable subgroup.
## Proposed Solution
@ -42,6 +42,7 @@ type ThresholdMultiSignaturePubKey struct { // K of N threshold multisig
Pubkeys []crypto.Pubkey `json:"pubkeys"`
}
```
We will derive N from the length of pubkeys. (For spatial efficiency in encoding)
`Verify` will expect an `[]byte` encoded version of the Multisignature.
@ -56,7 +57,7 @@ the kth public key on the message.
Address will be `Hash(amino_encoded_pubkey)`
The reason this doesn't use `log_8(n)` bytes per signer is because that heavily optimizes for the case where a very small number of signers are required.
e.g. for `n` of size `24`, that would only be more space efficient for `k < 3`.
e.g. for `n` of size `24`, that would only be more space efficient for `k < 3`.
This seems less likely, and that it should not be the case optimized for.
#### Weighted threshold signature
@ -70,17 +71,19 @@ type WeightedThresholdMultiSignaturePubKey struct {
Pubkeys []crypto.Pubkey `json:"pubkeys"`
}
```
Weights and Pubkeys must be of the same length.
Everything else proceeds identically to the K of N multisig,
Everything else proceeds identically to the K of N multisig,
except the multisig fails if the sum of the weights is less than the threshold.
#### Multisignature
The inter-mediate phase of the signatures (as it accrues more signatures) will be the following struct:
```golang
type Multisignature struct {
BitArray CryptoBitArray // Documented later
Sigs [][]byte
Sigs [][]byte
```
It is important to recall that each private key will output a signature on the provided message itself.
@ -88,24 +91,29 @@ So no signing algorithm ever outputs the multisignature.
The UI will take a signature, cast into a multisignature, and then keep adding
new signatures into it, and when done marshal into `[]byte`.
This will require the following helper methods:
```golang
func SigToMultisig(sig []byte, n int)
func GetIndex(pk crypto.Pubkey, []crypto.Pubkey)
func AddSignature(sig Signature, index int, multiSig *Multisignature)
```
The multisignature will be converted to an `[]byte` using amino.MarshalBinaryBare. \*
#### Bit Array
We would be using a new implementation of a bitarray. The struct it would be encoded/decoded from is
We would be using a new implementation of a bitarray. The struct it would be encoded/decoded from is
```golang
type CryptoBitArray struct {
ExtraBitsStored byte `json:"extra_bits"` // The number of extra bits in elems.
ExtraBitsStored byte `json:"extra_bits"` // The number of extra bits in elems.
Elems []byte `json:"elems"`
}
```
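For intuition, a minimal sketch of how the total bit count might be recovered from `ExtraBitsStored` (an assumption about the packing; the ADR does not pin down the exact layout):

```go
// numBits assumes every byte of Elems carries 8 bits except possibly the
// last one, whose occupancy is recorded in ExtraBitsStored.
func numBits(b CryptoBitArray) int {
	if b.ExtraBitsStored == 0 {
		return 8 * len(b.Elems)
	}
	return 8*(len(b.Elems)-1) + int(b.ExtraBitsStored)
}
```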
The reason for not using the BitArray currently implemented in `libs/common/bit_array.go`
is that it is less space efficient, due to a space / time trade-off.
Evidence for this is outlined in [this issue](https://github.com/tendermint/tendermint/issues/2077).
Evidence for this is outlined in [this issue](https://github.com/tendermint/tendermint/issues/2077).
In the multisig, we will not be performing arithmetic operations,
so there is no performance increase with the current implementation,
@ -122,7 +130,7 @@ Again the implementation of this space saving feature is straight forward.
### Encoding the structs
We will use straightforward amino encoding. This is chosen for ease of compatibility in other languages.
### Future points of discussion
@ -133,18 +141,20 @@ Aggregation of pubkeys / sigs in Schnorr sigs / BLS sigs is not backwards compat
## Status
Proposed.
## Consequences
### Positive
- Supports multisignatures, in a way that won't require any special cases in our downstream verification code.
- Easy to serialize / deserialize
- Unbounded number of signers
### Negative
- Larger codebase, however this should reside in a subfolder of tendermint/crypto, as it provides no new interfaces. (Ref #https://github.com/tendermint/go-crypto/issues/136)
- Space inefficient due to utilization of amino encoding
- Suggested implementation requires a new struct for every ASM.
### Neutral

+ 0
- 1
docs/architecture/adr-template.md View File

@ -6,7 +6,6 @@
## Status
## Consequences
### Positive


+ 5
- 2
docs/package.json View File

@ -3,7 +3,9 @@
"prettier": "^1.13.7",
"remark-cli": "^5.0.0",
"remark-lint-no-dead-urls": "^0.3.0",
"textlint": "^10.2.1"
"remark-lint-write-good": "^1.0.3",
"textlint": "^10.2.1",
"textlint-rule-stop-words": "^1.0.3"
},
"name": "tendermint",
"description": "Tendermint Core Documentation",
@ -31,7 +33,8 @@
"homepage": "https://tendermint.com/docs/",
"remarkConfig": {
"plugins": [
"remark-lint-no-dead-urls"
"remark-lint-no-dead-urls",
"remark-lint-write-good"
]
}
}

+ 3
- 3
docs/spec/README.md View File

@ -57,7 +57,7 @@ is malicious or faulty.
A commit in Tendermint is a set of signed messages from more than 2/3 of
the total weight of the current Validator set. Validators take turns proposing
blocks and voting on them. Once enough votes are received, the block is considered
committed. These votes are included in the _next_ block as proof that the previous block
was committed - they cannot be included in the current block, as that block has already been
created.
@ -71,8 +71,8 @@ of the latest state of the blockchain. To achieve this, it embeds
cryptographic commitments to certain information in the block "header".
This information includes the contents of the block (eg. the transactions),
the validator set committing the block, as well as the various results returned by the application.
Note, however, that block execution only occurs _after_ a block is committed.
Thus, application results can only be included in the _next_ block.
Also note that information like the transaction results and the validator set are never
directly included in the block - only their cryptographic digests (Merkle roots) are.


+ 7
- 9
docs/spec/blockchain/blockchain.md View File

@ -104,8 +104,8 @@ type Vote struct {
```
There are two types of votes:
a _prevote_ has `vote.Type == 1` and
a _precommit_ has `vote.Type == 2`.
## Signature
@ -162,10 +162,10 @@ We refer to certain globally available objects:
`prevBlock` is the `block` at the previous height,
and `state` keeps track of the validator set, the consensus parameters
and other results from the application. At the point when `block` is the block under consideration,
the current version of the `state` corresponds to the state
after executing transactions from the `prevBlock`.
Elements of an object are accessed as expected,
ie. `block.Header`.
See [here](https://github.com/tendermint/tendermint/blob/master/docs/spec/blockchain/state.md) for the definition of `state`.
### Header
@ -288,6 +288,7 @@ This can be used to validate the `LastCommit` included in the next block.
```go
block.NextValidatorsHash == SimpleMerkleRoot(state.NextValidators)
```
Simple Merkle root of the next validator set that will be the validator set that commits the next block.
Modifications to the validator set are defined by the application.
@ -427,11 +428,8 @@ Execute(s State, app ABCIApp, block Block) State {
AppHash: AppHash,
LastValidators: state.Validators,
Validators: state.NextValidators,
NextValidators: UpdateValidators(state.NextValidators, ValidatorChanges),
ConsensusParams: UpdateConsensusParams(state.ConsensusParams, ConsensusParamChanges),
}
}
```

+ 30
- 33
docs/spec/blockchain/encoding.md View File

@ -48,33 +48,33 @@ spec](https://github.com/tendermint/go-amino#computing-the-prefix-and-disambigua
In what follows, we provide the type names and prefix bytes directly.
Notice that when encoding byte-arrays, the length of the byte-array is appended
to the PrefixBytes. Thus the encoding of a byte array becomes `<PrefixBytes> <Length> <ByteArray>`. In other words, to encode any type listed below you do not need to be
familiar with amino encoding.
You can simply use the table below and concatenate Prefix || Length (of raw bytes) || raw bytes
(where || stands for byte concatenation here).
| Type               | Name                          | Prefix     | Length   | Notes |
| ------------------ | ----------------------------- | ---------- | -------- | ----- |
| PubKeyEd25519      | tendermint/PubKeyEd25519      | 0x1624DE64 | 0x20     |       |
| PubKeySecp256k1    | tendermint/PubKeySecp256k1    | 0xEB5AE987 | 0x21     |       |
| PrivKeyEd25519     | tendermint/PrivKeyEd25519     | 0xA3288910 | 0x40     |       |
| PrivKeySecp256k1   | tendermint/PrivKeySecp256k1   | 0xE1B0F79B | 0x20     |       |
| SignatureEd25519   | tendermint/SignatureEd25519   | 0x2031EA53 | 0x40     |       |
| SignatureSecp256k1 | tendermint/SignatureSecp256k1 | 0x7FC4A495 | variable |       |
### Examples
1. For example, the 33-byte (or 0x21-byte in hex) Secp256k1 pubkey
`020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
would be encoded as
`EB5AE98221020BD40F225A57ED383B440CF073BC5539D0341F5767D2BF2D78406D00475A2EE9`
2. For example, the variable size Secp256k1 signature (in this particular example 70 or 0x46 bytes)
`304402201CD4B8C764D2FD8AF23ECFE6666CA8A53886D47754D951295D2D311E1FEA33BF02201E0F906BB1CF2C30EAACFFB032A7129358AFF96B9F79B06ACFFB18AC90C2ADD7`
would be encoded as
`16E1FEEA46304402201CD4B8C764D2FD8AF23ECFE6666CA8A53886D47754D951295D2D311E1FEA33BF02201E0F906BB1CF2C30EAACFFB032A7129358AFF96B9F79B06ACFFB18AC90C2ADD7`
### Addresses
@ -152,28 +152,27 @@ func MakeParts(obj interface{}, partSize int) []Part
For an overview of Merkle trees, see
[wikipedia](https://en.wikipedia.org/wiki/Merkle_tree)
A Simple Tree is a simple compact binary tree for a static list of items. Simple Merkle trees are used in numerous places in Tendermint to compute a cryptographic digest of a data structure. In a Simple Tree, the transactions and validation signatures of a block are hashed using this simple merkle tree logic.
If the number of items is not a power of two, the tree will not be full
and some leaf nodes will be at different levels. Simple Tree tries to
keep both sides of the tree the same size, but the left side may be one
greater, for example:
```
Simple Tree with 6 items Simple Tree with 7 items
* *
/ \ / \
/ \ / \
/ \ / \
/ \ / \
* * * *
/ \ / \ / \ / \
/ \ / \ / \ / \
/ \ / \ / \ / \
* h2 * h5 * * * h6
/ \ / \ / \ / \ / \
/ \ / \ / \ / \ / \
h0 h1 h3 h4 h0 h1 h2 h3 h4 h5
```
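The split rule above ("the left side may be one greater") can be written down directly. The sketch below builds such a root over seven pre-hashed leaves; inner nodes are hashed here as plain SHA-256 over the concatenation of the two child hashes purely for illustration, whereas the real implementation encodes the child hashes with amino first:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// simpleRoot computes a Simple-Tree style root over pre-hashed leaves:
// when the count is odd, the left subtree gets the extra leaf.
func simpleRoot(leaves [][]byte) []byte {
	switch len(leaves) {
	case 0:
		return nil
	case 1:
		return leaves[0]
	default:
		split := (len(leaves) + 1) / 2 // left side may be one greater
		left := simpleRoot(leaves[:split])
		right := simpleRoot(leaves[split:])
		h := sha256.Sum256(append(append([]byte{}, left...), right...))
		return h[:]
	}
}

func main() {
	var leaves [][]byte
	for i := 0; i < 7; i++ { // seven items, as in the right-hand tree above
		h := sha256.Sum256([]byte{byte(i)})
		leaves = append(leaves, h[:])
	}
	fmt.Printf("%X\n", simpleRoot(leaves))
}
```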
@ -224,7 +223,6 @@ For `[]struct` arguments, we compute a `[][]byte` by hashing the individual `str
Proof that a leaf is in a Merkle tree consists of a simple structure:
```
type SimpleProof struct {
Aunts [][]byte
@ -265,8 +263,8 @@ func computeHashFromAunts(index, total int, leafHash []byte, innerHashes [][]byt
The Simple Tree is used to merkelize a list of items, so to merkelize a
(short) dictionary of key-value pairs, encode the dictionary as an
ordered list of `KVPair` structs. The block hash is such a hash
derived from all the fields of the block `Header`. The state hash is
similarly derived.
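As a sketch of the dictionary case, the keys can be sorted, each key-value pair hashed into a leaf, and the leaves fed to a simple-tree helper such as the `simpleRoot` sketched above; the leaf encoding here (key concatenated with value) is illustrative, not the exact `KVPair` encoding. It uses the standard `sort` and `crypto/sha256` packages:

```go
// kvRoot merkleizes a small map by ordering its keys and hashing each
// key-value pair into a leaf, then reusing a simple-tree root helper.
func kvRoot(kv map[string][]byte) []byte {
	keys := make([]string, 0, len(kv))
	for k := range kv {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	leaves := make([][]byte, 0, len(keys))
	for _, k := range keys {
		leaf := sha256.Sum256(append([]byte(k), kv[k]...))
		leaves = append(leaves, leaf[:])
	}
	return simpleRoot(leaves)
}
```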
### IAVL+ Tree
@ -300,7 +298,6 @@ For instance, an ED25519 PubKey would look like:
Where the `"value"` is the base64 encoding of the raw pubkey bytes, and the
`"type"` is the full disfix bytes for Ed25519 pubkeys.
### Signed Messages
Signed messages (eg. votes, proposals) in the consensus are encoded using Amino-JSON, rather than in the standard binary format.


+ 0
- 1
docs/spec/blockchain/state.md View File

@ -75,7 +75,6 @@ func TotalVotingPower(vals []Validators) int64{
}
```
### ConsensusParams
TODO

+ 41
- 44
docs/spec/consensus/bft-time.md View File

@ -1,56 +1,53 @@
# BFT time in Tendermint
Tendermint provides a deterministic, Byzantine fault-tolerant, source of time.
Time in Tendermint is defined with the Time field of the block header.
It satisfies the following properties:
- Time Monotonicity: Time is monotonically increasing, i.e., given
a header H1 for height h1 and a header H2 for height `h2 = h1 + 1`, `H1.Time < H2.Time`.
- Time Validity: Given a set of Commit votes that forms the `block.LastCommit` field, a range of
valid values for the Time field of the block header is defined only by
Precommit messages (from the LastCommit field) sent by correct processes, i.e.,
a faulty process cannot arbitrarily increase the Time value.
In the context of Tendermint, time is of type int64 and denotes UNIX time in milliseconds, i.e.,
it corresponds to the number of milliseconds since January 1, 1970. Before defining the rules that need to be enforced by the
Tendermint consensus protocol so that the properties above hold, we introduce the following definition:
- median of a set of `Vote` messages is equal to the median of `Vote.Time` fields of the corresponding `Vote` messages,
  where the value of `Vote.Time` is counted a number of times proportional to the voting power of the process. As in Tendermint
  the voting power is not uniform (one process one vote), a vote message is actually an aggregator of the same votes whose
  number is equal to the voting power of the process that has cast the corresponding vote message.
Let's consider the following example:
- we have four processes p1, p2, p3 and p4, with the following voting power distribution (p1, 23), (p2, 27), (p3, 10)
and (p4, 10). The total voting power is 70 (`N = 3f+1`, where `N` is the total voting power, and `f` is the maximum voting
power of the faulty processes), so we assume that the faulty processes have at most 23 of voting power.
Furthermore, we have the following vote messages in some LastCommit field (we ignore all fields except Time field): - (p1, 100), (p2, 98), (p3, 1000), (p4, 500). We assume that p3 and p4 are faulty processes. Let's assume that the
`block.LastCommit` message contains votes of processes p2, p3 and p4. Median is then chosen the following way:
the value 98 is counted 27 times, the value 1000 is counted 10 times and the value 500 is counted also 10 times.
So the median value will be the value 98. No matter what set of messages with at least `2f+1` voting power we
choose, the median value will always be between the values sent by correct processes.
We ensure Time Monotonicity and Time Validity properties by the following rules:
- let rs denote `RoundState` (consensus internal state) of some process. Then
`rs.ProposalBlock.Header.Time == median(rs.LastCommit) && rs.Proposal.Timestamp == rs.ProposalBlock.Header.Time`.
- Furthermore, when creating the `vote` message, the following rules for determining `vote.Time` field should hold:
- if `rs.Proposal` is defined then
`vote.Time = max(rs.Proposal.Timestamp + 1, time.Now())`, where `time.Now()`
denotes local Unix time in milliseconds.
- if `rs.Proposal` is not defined and `rs.Votes` contains +2/3 of the corresponding vote messages (votes for the
current height and round, and with the corresponding type (`Prevote` or `Precommit`)), then
`vote.Time = max(median(getVotes(rs.Votes, vote.Height, vote.Round, vote.Type)), time.Now())`,
where `getVotes` function returns the votes for particular `Height`, `Round` and `Type`.
The second rule is relevant for the case when a process jumps to a higher round upon receiving +2/3 votes for a higher
round, but the corresponding `Proposal` message for the higher round hasn't been received yet.
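The voting-power-weighted median used above can be computed by sorting the votes by time and walking the cumulative power until it passes half of the total. The sketch below (type and field names are hypothetical) reproduces the example: votes (98, power 27), (1000, power 10) and (500, power 10) yield a median of 98.

```go
package main

import (
	"fmt"
	"sort"
)

// weightedVote is a hypothetical stand-in for a precommit's Time and the
// voting power of the validator that cast it.
type weightedVote struct {
	timeMs int64 // vote time, UNIX milliseconds
	power  int64 // validator voting power
}

// weightedMedian returns the vote time at which the cumulative voting power
// first exceeds half of the total voting power of the given votes.
func weightedMedian(votes []weightedVote) int64 {
	sort.Slice(votes, func(i, j int) bool { return votes[i].timeMs < votes[j].timeMs })
	var total int64
	for _, v := range votes {
		total += v.power
	}
	var cum int64
	for _, v := range votes {
		cum += v.power
		if 2*cum > total {
			return v.timeMs
		}
	}
	return 0 // only reached for empty input
}

func main() {
	// The LastCommit example above: p2=(98, 27), p3=(1000, 10), p4=(500, 10).
	votes := []weightedVote{{98, 27}, {1000, 10}, {500, 10}}
	fmt.Println(weightedMedian(votes)) // 98
}
```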

+ 87
- 87
docs/spec/consensus/consensus.md View File

@ -2,31 +2,31 @@
## Terms
- The network is composed of optionally connected _nodes_. Nodes
directly connected to a particular node are called _peers_.
- The consensus process in deciding the next block (at some _height_
`H`) is composed of one or many _rounds_.
- `NewHeight`, `Propose`, `Prevote`, `Precommit`, and `Commit`
represent state machine states of a round. (aka `RoundStep` or
just "step").
- A node is said to be _at_ a given height, round, and step, or at
`(H,R,S)`, or at `(H,R)` in short to omit the step.
- To _prevote_ or _precommit_ something means to broadcast a [prevote
vote](https://godoc.org/github.com/tendermint/tendermint/types#Vote)
or [first precommit
vote](https://godoc.org/github.com/tendermint/tendermint/types#FirstPrecommit)
for something.
- A vote _at_ `(H,R)` is a vote signed with the bytes for `H` and `R`
included in its [sign-bytes](block-structure.html#vote-sign-bytes).
- _+2/3_ is short for "more than 2/3"
- _1/3+_ is short for "1/3 or more"
- A set of +2/3 of prevotes for a particular block or `<nil>` at
`(H,R)` is called a _proof-of-lock-change_ or _PoLC_ for short.
## State Machine Overview
At each height of the blockchain a round-based protocol is run to
determine the next block. Each round is composed of three _steps_
(`Propose`, `Prevote`, and `Precommit`), along with two special steps
`Commit` and `NewHeight`.
@ -36,22 +36,22 @@ In the optimal scenario, the order of steps is:
NewHeight -> (Propose -> Prevote -> Precommit)+ -> Commit -> NewHeight ->...
```
The sequence `(Propose -> Prevote -> Precommit)` is called a _round_.
There may be more than one round required to commit a block at a given
height. Examples for why more rounds may be required include:
- The designated proposer was not online.
- The block proposed by the designated proposer was not valid.
- The block proposed by the designated proposer did not propagate
in time.
- The block proposed was valid, but +2/3 of prevotes for the proposed
block were not received in time for enough validator nodes by the
time they reached the `Precommit` step. Even though +2/3 of prevotes
are necessary to progress to the next step, at least one validator
may have voted `<nil>` or maliciously voted for something else.
- The block proposed was valid, and +2/3 of prevotes were received for
enough nodes, but +2/3 of precommits for the proposed block were not
received for enough validator nodes.
Some of these problems are resolved by moving onto the next round &
proposer. Others are resolved by increasing certain round timeout
@ -80,14 +80,13 @@ parameters over each successive round.
+--------------------------------------------------------------------+
```
# Background Gossip
A node may not have a corresponding validator private key, but it
nevertheless plays an active role in the consensus process by relaying
relevant meta-data, proposals, blocks, and votes to its peers. A node
that has the private keys of an active validator and is engaged in
signing votes is called a _validator-node_. All nodes (not just
validator-nodes) have an associated state (the current height, round,
and step) and work to make progress.
@ -97,21 +96,21 @@ epidemic gossip protocol is implemented among some of these channels to
bring peers up to speed on the most recent state of consensus. For
example,
- Nodes gossip `PartSet` parts of the current round's proposer's
proposed block. A LibSwift inspired algorithm is used to quickly
broadcast blocks across the gossip network.
- Nodes gossip prevote/precommit votes. A node `NODE_A` that is ahead
of `NODE_B` can send `NODE_B` prevotes or precommits for `NODE_B`'s
current (or future) round to enable it to progress forward.
- Nodes gossip prevotes for the proposed PoLC (proof-of-lock-change)
round if one is proposed.
- Nodes gossip to nodes lagging in blockchain height with block
[commits](https://godoc.org/github.com/tendermint/tendermint/types#Commit)
for older blocks.
- Nodes opportunistically gossip `HasVote` messages to hint peers what
votes it already has.
- Nodes broadcast their current state to all neighboring peers. (but
is not gossiped further)
There's more, but let's not get ahead of ourselves here.
@ -144,14 +143,14 @@ and all prevotes at `PoLC-Round`. --> goto `Prevote(H,R)` - After
Upon entering `Prevote`, each validator broadcasts its prevote vote.
- First, if the validator is locked on a block since `LastLockRound`
but now has a PoLC for something else at round `PoLC-Round` where
`LastLockRound < PoLC-Round < R`, then it unlocks.
- If the validator is still locked on a block, it prevotes that.
- Else, if the proposed block from `Propose(H,R)` is good, it
prevotes that.
- Else, if the proposal is invalid or wasn't received on time, it
prevotes `<nil>`.
The `Prevote` step ends: - After +2/3 prevotes for a particular block or
`<nil>`. --> goto `Precommit(H,R)` - After `timeoutPrevote` after
@ -161,11 +160,12 @@ receiving any +2/3 prevotes. --> goto `Precommit(H,R)` - After
### Precommit Step (height:H,round:R)
Upon entering `Precommit`, each validator broadcasts its precommit vote.
- If the validator has a PoLC at `(H,R)` for a particular block `B`, it
(re)locks (or changes lock to) and precommits `B` and sets
`LastLockRound = R`. - Else, if the validator has a PoLC at `(H,R)` for
`<nil>`, it unlocks and precommits `<nil>`. - Else, it keeps the lock
unchanged and precommits `<nil>`.
A precommit for `<nil>` means "I didn’t see a PoLC for this round, but I
did get +2/3 prevotes and waited a bit".
@ -177,24 +177,24 @@ conditions](#common-exit-conditions)
### Common exit conditions
- After +2/3 precommits for a particular block. --> goto
`Commit(H)`
- After any +2/3 prevotes received at `(H,R+x)`. --> goto
`Prevote(H,R+x)`
- After any +2/3 precommits received at `(H,R+x)`. --> goto
`Precommit(H,R+x)`
### Commit Step (height:H)
- Set `CommitTime = now()`
- Wait until block is received. --> goto `NewHeight(H+1)`
### NewHeight Step (height:H)
- Move `Precommits` to `LastCommit` and increment height.
- Set `StartTime = CommitTime+timeoutCommit`
- Wait until `StartTime` to receive straggler commits. --> goto
`Propose(H,0)`
## Proofs
@ -236,20 +236,20 @@ Further, define the JSet at height `H` of a set of validators `VSet` to
be the union of the JSets for each validator in `VSet`. For a given
commit by honest validators at round `R` for block `B` we can construct
a JSet to justify the commit for `B` at `R`. We say that a JSet
_justifies_ a commit at `(H,R)` if all the committers (validators in the
commit-set) are each justified in the JSet with no duplicitous vote
signatures (by the committers).
- **Lemma**: When a fork is detected by the existence of two
conflicting [commits](./validators.html#commiting-a-block), the
union of the JSets for both commits (if they can be compiled) must
include double-signing by at least 1/3+ of the validator set.
**Proof**: The commit cannot be at the same round, because that
would immediately imply double-signing by 1/3+. Take the union of
the JSets of both commits. If there is no double-signing by at least
1/3+ of the validator set in the union, then no honest validator
could have precommitted any different block after the first commit.
Yet, +2/3 did. Reductio ad absurdum.
As a corollary, when there is a fork, an external process can determine
the blame by requiring each validator to justify all of its round votes.


+ 34
- 35
docs/spec/consensus/light-client.md View File

@ -1,14 +1,14 @@
# Light client
A light client is a process that connects to the Tendermint Full Node(s) and then tries to verify the Merkle proofs
about the blockchain application. In this document we describe mechanisms that ensure that the Tendermint light client
has the same level of security as Full Node processes (without being itself a Full Node).
To be able to validate a Merkle proof, a light client needs to validate the blockchain header that contains the root app hash.
Validating a blockchain header in Tendermint consists of verifying that the header is committed (signed) by >2/3 of the
voting power of the corresponding validator set. As the validator set is a dynamic set (it is changing), one of the
core functionalities of the light client is updating the current validator set, which is then used to verify the
blockchain header, and further the corresponding Merkle proofs.
For the purpose of this light client specification, we assume that the Tendermint Full Node exposes the following functions over
Tendermint RPC:
@ -19,51 +19,50 @@ Validators(height int64) (ResultValidators, error) // returns validator set for
LastHeader(valSetNumber int64) (SignedHeader, error) // returns last header signed by the validator set with the given validator set number
type SignedHeader struct {
  Header       Header
  Commit       Commit
  ValSetNumber int64
}
type ResultValidators struct {
  BlockHeight int64
  Validators  []Validator
  // time the current validator set is initialised, i.e, time of the last validator change before header BlockHeight
  ValSetTime int64
}
```
We assume that Tendermint keeps track of the validator set changes and that each time a validator set is changed it is
being assigned the next sequence number. We can call this number the validator set sequence number. Tendermint also remembers
the Time from the header when the next validator set is initialised (starts to be in power), and we refer to this time
as validator set init time.
Furthermore, we assume that each validator set change is signed (committed) by the current validator set. More precisely,
given a block `H` that contains transactions that are modifying the current validator set, the Merkle root hash of the next
validator set (modified based on transactions from block H) will be in block `H+1` (and signed by the current validator
set), and then starting from the block `H+2`, it will be signed by the next validator set.
Note that the real Tendermint RPC API is slightly different (for example, response messages contain more data and function
names are slightly different); we shortened (and modified) it for the purpose of this document to make the spec more
clear and simple. Furthermore, note that in case of the third function, the returned header has `ValSetNumber` equals to
`valSetNumber+1`.
Locally, light client manages the following state:
```golang
valSet       []Validator // current validator set (last known and verified validator set)
valSetNumber int64       // sequence number of the current validator set
valSetHash   []byte      // hash of the current validator set
valSetTime   int64       // time when the current validator set is initialised
```
The light client is initialised with the trusted validator set, for example based on the known validator set hash,
validator set sequence number and the validator set init time.
The core of the light client logic is captured by the VerifyAndUpdate function that is used to 1) verify if the given header is valid,
and 2) update the validator set (when the given header is valid and it is more recent than the seen headers).
```golang
VerifyAndUpdate(signedHeader SignedHeader):
assertThat signedHeader.valSetNumber >= valSetNumber
if isValid(signedHeader) and signedHeader.Header.Time <= valSetTime + UNBONDING_PERIOD then
setValidatorSet(signedHeader)
return true
@ -76,7 +75,7 @@ isValid(signedHeader SignedHeader):
assertThat Hash(valSetOfTheHeader) == signedHeader.Header.ValSetHash
assertThat signedHeader is passing basic validation
if votingPower(signedHeader.Commit) > 2/3 * votingPower(valSetOfTheHeader) then return true
else
return false
setValidatorSet(signedHeader SignedHeader):
@ -85,7 +84,7 @@ setValidatorSet(signedHeader SignedHeader):
valSet = nextValSet.Validators
valSetHash = signedHeader.Header.ValidatorsHash
valSetNumber = signedHeader.ValSetNumber
valSetTime = nextValSet.ValSetTime
votingPower(commit Commit):
votingPower = 0
@ -96,9 +95,9 @@ votingPower(commit Commit):
votingPower(validatorSet []Validator):
for each validator in validatorSet do:
votingPower += validator.VotingPower
return votingPower
updateValidatorSet(valSetNumberOfTheHeader):
while valSetNumber != valSetNumberOfTheHeader do
signedHeader = LastHeader(valSetNumber)
@ -110,5 +109,5 @@ updateValidatorSet(valSetNumberOfTheHeader):
Note that in the logic above we assume that the light client will always go upward with respect to header verifications,
i.e., that it will always be used to verify more recent headers. In case a light client needs to be used to verify older
headers (go backward) the same mechanisms and similar logic can be used. In case a call to the FullNode or subsequent
checks fail, a light client needs to implement some recovery strategy, for example connecting to another FullNode.
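As a small illustration of the `votingPower(signedHeader.Commit) > 2/3 * votingPower(valSetOfTheHeader)` check used in `isValid`, integer arithmetic avoids floating-point rounding (the function name is illustrative):

```golang
// moreThanTwoThirds reports whether signedPower is strictly greater than
// two thirds of totalPower, using integer math to avoid rounding issues.
func moreThanTwoThirds(signedPower, totalPower int64) bool {
	return 3*signedPower > 2*totalPower
}
```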

+ 6
- 5
docs/spec/p2p/connection.md View File

@ -4,10 +4,10 @@
`MConnection` is a multiplex connection that supports multiple independent streams
with distinct quality of service guarantees atop a single TCP connection.
Each stream is known as a `Channel` and each `Channel` has a globally unique _byte id_.
Each `Channel` also has a relative priority that determines the quality of service
of the `Channel` compared to other `Channel`s.
The _byte id_ and the relative priorities of each `Channel` are configured upon
initialization of the connection.
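For illustration only, the per-channel configuration described above can be modelled as a byte id paired with a priority; the struct and values below are a hypothetical sketch, not the exact Tendermint API:

```go
// channelDescriptor is a hypothetical sketch of per-Channel configuration:
// a globally unique byte id plus a relative priority used when scheduling sends.
type channelDescriptor struct {
	ID       byte
	Priority int
}

// Example: the higher-priority channel gets a proportionally larger share of
// the connection when both channels have messages queued.
var channels = []channelDescriptor{
	{ID: 0x20, Priority: 5}, // e.g. a consensus-like channel
	{ID: 0x30, Priority: 1}, // e.g. a mempool-like channel
}
```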
The `MConnection` supports three packet types:
@ -53,13 +53,14 @@ Messages are chosen for a batch one at a time from the channel with the lowest r
## Sending Messages
There are two methods for sending messages:
```go
func (m MConnection) Send(chID byte, msg interface{}) bool {}
func (m MConnection) TrySend(chID byte, msg interface{}) bool {}
```
`Send(chID, msg)` is a blocking call that waits until `msg` is successfully queued
for the channel with the given id byte `chID`. The message `msg` is serialized
using the `tendermint/wire` submodule's `WriteBinary()` reflection routine.
`TrySend(chID, msg)` is a nonblocking call that queues the message msg in the channel
@ -76,8 +77,8 @@ and other higher level thread-safe data used by the reactors.
## Switch/Reactor
The `Switch` handles peer connections and exposes an API to receive incoming messages
on `Reactors`. Each `Reactor` is responsible for handling incoming messages of one
or more `Channels`. So while sending outgoing messages is typically performed on the peer,
incoming messages are received on the reactor.
```go


+ 2
- 1
docs/spec/p2p/node.md View File

@ -17,8 +17,9 @@ See [the peer-exchange docs](https://github.com/tendermint/tendermint/blob/maste
## New Full Node
A new node needs a few things to connect to the network:
- a list of seeds, which can be provided to Tendermint via config file or flags,
or hardcoded into the software by in-process apps
- a `ChainID`, also called `Network` at the p2p layer
- a recent block height, H, and hash, HASH for the blockchain.


+ 8
- 9
docs/spec/p2p/peer.md View File

@ -29,26 +29,26 @@ Both handshakes have configurable timeouts (they should complete quickly).
Tendermint implements the Station-to-Station protocol
using X25519 keys for Diffie-Hellman key exchange and chacha20poly1305 for encryption.
It goes as follows:
- generate an ephemeral X25519 keypair
- send the ephemeral public key to the peer
- wait to receive the peer's ephemeral public key
- compute the Diffie-Hellman shared secret using the peer's ephemeral public key and our ephemeral private key
- generate two keys to use for encryption (sending and receiving) and a challenge for authentication as follows:
- create a hkdf-sha256 instance with the key being the diffie hellman shared secret, and info parameter as
`TENDERMINT_SECRET_CONNECTION_KEY_AND_CHALLENGE_GEN`
- get 96 bytes of output from hkdf-sha256
- if we had the smaller ephemeral pubkey, use the first 32 bytes for the key for receiving, the second 32 bytes for sending; else the opposite
- use the last 32 bytes of output for the challenge
- use a separate nonce for receiving and sending. Both nonces start at 0, and should support the full 96 bit nonce range
- all communications from now on are encrypted in 1024 byte frames,
using the respective secret and nonce. Each nonce is incremented by one after each use.
- we now have an encrypted channel, but still need to authenticate
- sign the common challenge obtained from the hkdf with our persistent private key
- send the amino encoded persistent pubkey and signature to the peer
- wait to receive the persistent public key and signature from the peer
- verify the signature on the challenge using the peer's persistent public key
If this is an outgoing connection (we dialed the peer) and we used a peer ID,
then finally verify that the peer's persistent public key corresponds to the peer ID we dialed,
ie. `peer.PubKey.Address() == <ID>`.
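The key-derivation step in the handshake above can be illustrated with the `golang.org/x/crypto/hkdf` package. This is a sketch under the stated convention (the smaller ephemeral pubkey receives on the first 32 bytes); the package name, function name and `locIsLeast` flag are illustrative:

```go
package p2p

import (
	"crypto/sha256"
	"io"

	"golang.org/x/crypto/hkdf"
)

// deriveSecrets splits 96 bytes of HKDF-SHA256 output into a receive key,
// a send key and an authentication challenge. locIsLeast reports whether our
// ephemeral pubkey was the smaller of the two.
func deriveSecrets(dhSecret []byte, locIsLeast bool) (recv, send, challenge [32]byte, err error) {
	kdf := hkdf.New(sha256.New, dhSecret, nil,
		[]byte("TENDERMINT_SECRET_CONNECTION_KEY_AND_CHALLENGE_GEN"))
	var out [96]byte
	if _, err = io.ReadFull(kdf, out[:]); err != nil {
		return
	}
	if locIsLeast { // we had the smaller ephemeral pubkey
		copy(recv[:], out[0:32])
		copy(send[:], out[32:64])
	} else {
		copy(send[:], out[0:32])
		copy(recv[:], out[32:64])
	}
	copy(challenge[:], out[64:96])
	return
}
```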
@ -69,7 +69,6 @@ an optional whitelist which can be managed through the ABCI app -
if the whitelist is enabled and the peer does not qualify, the connection is
terminated.
### Tendermint Version Handshake
The Tendermint Version Handshake allows the peers to exchange their NodeInfo:
@ -89,6 +88,7 @@ type NodeInfo struct {
```
The connection is disconnected if:
- `peer.NodeInfo.ID` is not equal `peerConn.ID`
- `peer.NodeInfo.Version` is not formatted as `X.X.X` where X are integers known as Major, Minor, and Revision
- `peer.NodeInfo.Version` Major is not the same as ours
@ -97,7 +97,6 @@ The connection is disconnected if:
- `peer.NodeInfo.ListenAddr` is malformed or is a DNS host that cannot be
resolved
At this point, if we have not disconnected, the peer is valid.
It is added to the switch and hence all reactors via the `AddPeer` method.
Note that each reactor may handle multiple channels.


+ 30
- 30
docs/spec/reactors/block_sync/impl.md View File

@ -1,46 +1,46 @@
## Blockchain Reactor
- coordinates the pool for syncing
- coordinates the store for persistence
- coordinates the playing of blocks towards the app using a sm.BlockExecutor
- handles switching between fastsync and consensus
- it is a p2p.BaseReactor
- starts the pool.Start() and its poolRoutine()
- registers all the concrete types and interfaces for serialisation
### poolRoutine
- listens to these channels:
- pool requests blocks from a specific peer by posting to requestsCh, block reactor then sends
a &bcBlockRequestMessage for a specific height
- pool signals timeout of a specific peer by posting to timeoutsCh
- switchToConsensusTicker to periodically try and switch to consensus
- trySyncTicker to periodically check if we have fallen behind and then catch-up sync
- if there aren't any new blocks available on the pool it skips syncing
- tries to sync the app by taking downloaded blocks from the pool, gives them to the app and stores
them on disk
- implements Receive which is called by the switch/peer
- calls AddBlock on the pool when it receives a new block from a peer
## Block Pool
- responsible for downloading blocks from peers
- makeRequestersRoutine()
- removes timeout peers
- starts new requesters by calling makeNextRequester()
- requestRoutine():
- picks a peer and sends the request, then blocks until:
- pool is stopped by listening to pool.Quit
- requester is stopped by listening to Quit
- request is redone
- we receive a block
- gotBlockCh is strange
## Block Store
- persists blocks to disk
# TODO
- How does the switch from bcR to conR happen? Does conR persist blocks to disk too?
- What is the interaction between the consensus and blockchain reactors?

+ 145
- 147
docs/spec/reactors/block_sync/reactor.md View File

@ -46,11 +46,11 @@ type bcStatusResponseMessage struct {
## Architecture and algorithm
The Blockchain reactor is organised as a set of concurrent tasks:

- Receive routine of Blockchain Reactor
- Task for creating Requesters
- Set of Requesters tasks and
- Controller task.
![Blockchain Reactor Architecture Diagram](img/bc-reactor.png)
@ -58,41 +58,39 @@ The Blockchain reactor is organised as a set of concurrent tasks:
These are the core data structures necessarily to provide the Blockchain Reactor logic.
Requester data structure is used to track assignment of request for `block` at position `height` to a peer with id equal to `peerID`.
```go
type Requester {
  mtx         Mutex
  block       Block
  height      int64
  peerID      p2p.ID
  redoChannel chan struct{}
}
```
Pool is the core data structure that stores the last executed block (`height`), the assignment of requests to peers (`requesters`), the current height and number of pending requests for each peer (`peers`), the maximum peer height, etc.
```go
type Pool {
  mtx             Mutex
  requesters      map[int64]*Requester
  height          int64
  peers           map[p2p.ID]*Peer
  maxPeerHeight   int64
  numPending      int32
  store           BlockStore
  requestsChannel chan<- BlockRequest
  errorsChannel   chan<- peerError
}
```
Peer data structure stores for each peer the current `height` and the number of pending requests sent to the peer (`numPending`), etc.
```go
type Peer struct {
  id p2p.ID
height int64
numPending int32
timeout *time.Timer
@ -100,202 +98,202 @@ type Peer struct {
}
```
BlockRequest is an internal data structure used to denote the current mapping of a request for a block at some `height` to a peer (`PeerID`).
```go
type BlockRequest {
Height int64
PeerID p2p.ID
}
```
### Receive routine of Blockchain Reactor
It is executed upon message reception on the BlockchainChannel inside p2p receive routine. There is a separate p2p receive routine (and therefore receive routine of the Blockchain Reactor) executed for each peer. Note that try to send will not block (returns immediately) if outgoing buffer is full.
```go
handleMsg(pool, m):
    upon receiving bcBlockRequestMessage m from peer p:
      block = load block for height m.Height from pool.store
      if block != nil then
        try to send BlockResponseMessage(block) to p
      else
        try to send bcNoBlockResponseMessage(m.Height) to p

    upon receiving bcBlockResponseMessage m from peer p:
      pool.mtx.Lock()
      requester = pool.requesters[m.Height]
      if requester == nil then
        error("peer sent us a block we didn't expect")
        continue
      if requester.block == nil and requester.peerID == p then
        requester.block = m
        pool.numPending -= 1 // atomic decrement
        peer = pool.peers[p]
        if peer != nil then
          peer.numPending--
          if peer.numPending == 0 then
            peer.timeout.Stop()
            // NOTE: we don't send Quit signal to the corresponding requester task!
          else
            trigger peer timeout to expire after peerTimeout
      pool.mtx.Unlock()

    upon receiving bcStatusRequestMessage m from peer p:
      try to send bcStatusResponseMessage(pool.store.Height)

    upon receiving bcStatusResponseMessage m from peer p:
      pool.mtx.Lock()
      peer = pool.peers[p]
      if peer != nil then
        peer.height = m.height
      else
        peer = create new Peer data structure with id = p and height = m.Height
        pool.peers[p] = peer
      if m.Height > pool.maxPeerHeight then
        pool.maxPeerHeight = m.Height
      pool.mtx.Unlock()

onTimeout(p):
  send error message to pool error channel
  peer = pool.peers[p]
  peer.didTimeout = true
```
### Requester tasks
Requester task is responsible for fetching a single block at position `height`.
```go
fetchBlock(height, pool):
  while true do
    peerID = nil
    block = nil
    peer = pickAvailablePeer(height)
    peerId = peer.id

    enqueue BlockRequest(height, peerID) to pool.requestsChannel
    redo = false
    while !redo do
      select {
        upon receiving Quit message do
          return
        upon receiving message on redoChannel do
          mtx.Lock()
          pool.numPending++
          redo = true
          mtx.UnLock()
      }

pickAvailablePeer(height):
  selectedPeer = nil
  while selectedPeer = nil do
    pool.mtx.Lock()
    for each peer in pool.peers do
      if !peer.didTimeout and peer.numPending < maxPendingRequestsPerPeer and peer.height >= height then
        peer.numPending++
        selectedPeer = peer
        break
    pool.mtx.Unlock()
    if selectedPeer = nil then
      sleep requestIntervalMS
  return selectedPeer
```
### Task for creating Requesters
This task is responsible for continuously creating and starting Requester tasks.
```go
createRequesters(pool):
  while true do
    if !pool.isRunning then break
    if pool.numPending < maxPendingRequests or size(pool.requesters) < maxTotalRequesters then
      pool.mtx.Lock()
      nextHeight = pool.height + size(pool.requesters)
      requester = create new requester for height nextHeight
      pool.requesters[nextHeight] = requester
      pool.numPending += 1 // atomic increment
      start requester task
      pool.mtx.Unlock()
    else
      sleep requestIntervalMS
      pool.mtx.Lock()
      for each peer in pool.peers do
        if !peer.didTimeout && peer.numPending > 0 && peer.curRate < minRecvRate then
          send error on pool error channel
          peer.didTimeout = true
        if peer.didTimeout then
          for each requester in pool.requesters do
            if requester.getPeerID() == peer then
              enqueue msg on requestor's redoChannel
          delete(pool.peers, peerID)
      pool.mtx.Unlock()
```
### Main blockchain reactor controller task
```go
main(pool):
  create trySyncTicker with interval trySyncIntervalMS
  create statusUpdateTicker with interval statusUpdateIntervalSeconds
  create switchToConsensusTicker with interval switchToConsensusIntervalSeconds

  while true do
    select {
      upon receiving BlockRequest(Height, Peer) on pool.requestsChannel:
        try to send bcBlockRequestMessage(Height) to Peer

      upon receiving error(peer) on errorsChannel:
        stop peer for error

      upon receiving message on statusUpdateTickerChannel:
        broadcast bcStatusRequestMessage(bcR.store.Height) // message sent in a separate routine

      upon receiving message on switchToConsensusTickerChannel:
        pool.mtx.Lock()
        receivedBlockOrTimedOut = pool.height > 0 || (time.Now() - pool.startTime) > 5 Seconds
        ourChainIsLongestAmongPeers = pool.maxPeerHeight == 0 || pool.height >= pool.maxPeerHeight
        haveSomePeers = size of pool.peers > 0
        pool.mtx.Unlock()
        if haveSomePeers && receivedBlockOrTimedOut && ourChainIsLongestAmongPeers then
          switch to consensus mode

      upon receiving message on trySyncTickerChannel:
        for i = 0; i < 10; i++ do
          pool.mtx.Lock()
          firstBlock = pool.requesters[pool.height].block
          secondBlock = pool.requesters[pool.height].block
          if firstBlock == nil or secondBlock == nil then continue
          pool.mtx.Unlock()
          verify firstBlock using LastCommit from secondBlock
          if verification failed
            pool.mtx.Lock()
            peerID = pool.requesters[pool.height].peerID
            redoRequestsForPeer(peerId)
            delete(pool.peers, peerID)
            stop peer peerID for error
            pool.mtx.Unlock()
          else
            delete(pool.requesters, pool.height)
            save firstBlock to store
            pool.height++
            execute firstBlock
    }

redoRequestsForPeer(pool, peerId):
  for each requester in pool.requesters do
    if requester.getPeerID() == peerID
      enqueue msg on redoChannel for requester
```
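The `trySyncTicker` branch hinges on one detail: the block at `pool.height` is only saved and executed after it has been validated against the `LastCommit` carried by the block at `pool.height+1`. Below is a hedged Go sketch of just that step; `Block`, `Commit`, and `verifyCommit` are simplified placeholders, not the real types or the real validator-set verification.

```go
package blockpool

// Commit is a simplified placeholder for +2/3 precommit signatures for a block.
type Commit struct {
	Height int64
}

// Block is a simplified placeholder for a block carrying its predecessor's commit.
type Block struct {
	Height     int64
	LastCommit *Commit
}

// verifyAndApplyFirst mirrors the trySyncTicker branch above: the block at the
// pool's current height (`first`) is only accepted once it has been verified
// against the LastCommit carried by the block at the next height (`second`).
func verifyAndApplyFirst(first, second *Block,
	verifyCommit func(height int64, c *Commit) error) (applied bool, err error) {

	if first == nil || second == nil {
		// not enough data yet; wait for the next tick
		return false, nil
	}
	if err := verifyCommit(first.Height, second.LastCommit); err != nil {
		// verification failed: the caller redoes the requests and drops the peer
		return false, err
	}
	// at this point the caller saves `first`, advances pool.height and executes it
	return true, nil
}
```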
## Channels
Defines `maxMsgSize` for the maximum size of incoming messages,


+ 105
- 106
docs/spec/reactors/consensus/consensus-reactor.md

@ -1,49 +1,48 @@
# Consensus Reactor
Consensus Reactor defines a reactor for the consensus service. It contains the ConsensusState service that
manages the state of the Tendermint consensus internal state machine.
When Consensus Reactor is started, it starts the Broadcast Routine, which starts the ConsensusState service.
Furthermore, for each peer that is added to the Consensus Reactor, it creates (and manages) the known peer state
(that is used extensively in gossip routines) and starts the following three routines for the peer p:
Gossip Data Routine, Gossip Votes Routine and QueryMaj23Routine. Finally, Consensus Reactor is responsible
for decoding messages received from a peer and for adequate processing of the message depending on its type and content.
The processing normally consists of updating the known peer state and, for some messages
(`ProposalMessage`, `BlockPartMessage` and `VoteMessage`), also forwarding the message to the ConsensusState module
for further processing. In the following text we specify the core functionality of those separate units of execution
that are part of the Consensus Reactor.
## ConsensusState service
Consensus State handles execution of the Tendermint BFT consensus algorithm. It processes votes and proposals,
and upon reaching agreement, commits blocks to the chain and executes them against the application.
The internal state machine receives input from peers, the internal validator and from a timer.
Inside Consensus State we have the following units of execution: Timeout Ticker and Receive Routine.
Timeout Ticker is a timer that schedules timeouts conditional on the height/round/step that are processed
by the Receive Routine.
### Receive Routine of the ConsensusState service
Receive Routine of the ConsensusState handles messages which may cause internal consensus state transitions.
It is the only routine that updates RoundState, which contains the internal consensus state.
Updates (state transitions) happen on timeouts, complete proposals, and 2/3 majorities.
It receives messages from peers, internal validators and from Timeout Ticker
and invokes the corresponding handlers, potentially updating the RoundState.
The details of the protocol (together with formal proofs of correctness) implemented by the Receive Routine are
discussed in a separate document. For the purposes of this document
it is sufficient to understand that the Receive Routine manages and updates the RoundState data structure that is
then extensively used by the gossip routines to determine what information should be sent to peer processes.
## Round State
RoundState defines the internal consensus state. It contains the height, round, round step, the current validator set,
a proposal and proposal block for the current round, the locked round and block (if some block is locked), the set of
received votes, the last commit and the last validator set.
```golang
type RoundState struct {
Height int64
Round int
Step RoundStepType
Validators ValidatorSet
@ -54,10 +53,10 @@ type RoundState struct {
LockedBlock Block
LockedBlockParts PartSet
Votes HeightVoteSet
LastCommit VoteSet
LastValidators ValidatorSet
}
```
Internally, consensus will run as a state machine with the following states:
@ -82,8 +81,8 @@ type PeerRoundState struct {
Round int // Round peer is at, -1 if unknown.
Step RoundStepType // Step peer is at
Proposal bool // True if peer has proposal for this round
ProposalBlockPartsHeader PartSetHeader
ProposalBlockParts BitArray
ProposalPOLRound int // Proposal's POL round. -1 if none.
ProposalPOL BitArray // nil until ProposalPOLMessage received.
Prevotes BitArray // All votes peer has for this round
@ -93,19 +92,19 @@ type PeerRoundState struct {
CatchupCommitRound int // Round that we have commit for. Not necessarily unique. -1 if none.
CatchupCommit BitArray // All commit precommits peer has for this height & CatchupCommitRound
}
```
## Receive method of Consensus reactor
The entry point of the Consensus reactor is a receive method. When a message is received from a peer p,
normally the peer round state is updated correspondingly, and some messages
are passed for further processing, for example to ConsensusState service. We now specify the processing of messages
in the receive method of Consensus reactor for each message type. In the following message handler, `rs` and `prs` denote
`RoundState` and `PeerRoundState`, respectively.
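As a rough illustration of that dispatch (a sketch, not the actual implementation), the receive method can be pictured as a type switch that always updates `prs` and only forwards the consensus-critical messages; the types below are simplified stand-ins.

```go
package consensus

// Simplified stand-ins for the real message and peer-state types.
type (
	Message             interface{}
	NewRoundStepMessage struct{ Height int64; Round, Step int }
	HasVoteMessage      struct{ Height int64; Round, Type, Index int }
	ProposalMessage     struct{}
	BlockPartMessage    struct{}
	VoteMessage         struct{}
)

// PeerRoundState keeps what we know about the peer (bit-arrays etc. elided).
type PeerRoundState struct {
	Height int64
	Round  int
	Step   int
}

// Reactor forwards consensus-critical messages to the ConsensusState service.
type Reactor struct {
	peerMsgQueue chan Message
}

// handleMessage sketches the dispatch in the reactor's receive method: the
// peer round state is always updated, and only proposal, block-part and vote
// messages are forwarded for further processing by ConsensusState.
func (r *Reactor) handleMessage(prs *PeerRoundState, msg Message) {
	switch m := msg.(type) {
	case *NewRoundStepMessage:
		// just remember these values (the full handler is specified below)
		prs.Height, prs.Round, prs.Step = m.Height, m.Round, m.Step
	case *HasVoteMessage:
		_ = m // record the vote in the peer's bit-array (elided)
	case *ProposalMessage, *BlockPartMessage, *VoteMessage:
		r.peerMsgQueue <- m // forwarded to the ConsensusState receive routine
	default:
		// the remaining message types only update the peer state
	}
}
```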
### NewRoundStepMessage handler
```
handleMessage(msg):
if msg is from smaller height/round/step then return
// Just remember these values.
@ -116,10 +115,10 @@ handleMessage(msg):
Update prs with values from msg
if prs.Height or prs.Round has been updated then
reset Proposal related fields of the peer state
if prs.Round has been updated and msg.Round == prsCatchupCommitRound then
prs.Precommits = prsCatchupCommit
if prs.Height has been updated then
if prsHeight+1 == msg.Height && prsRound == msg.LastCommitRound then
prs.LastCommitRound = msg.LastCommitRound
prs.LastCommit = prs.Precommits
@ -128,111 +127,111 @@ handleMessage(msg):
prs.LastCommit = nil
}
Reset prs.CatchupCommitRound and prs.CatchupCommit
```
### CommitStepMessage handler
```
handleMessage(msg):
if prs.Height == msg.Height then
prs.ProposalBlockPartsHeader = msg.BlockPartsHeader
prs.ProposalBlockParts = msg.BlockParts
```
### HasVoteMessage handler
```
handleMessage(msg):
if prs.Height == msg.Height then
prs.setHasVote(msg.Height, msg.Round, msg.Type, msg.Index)
```
### VoteSetMaj23Message handler
```
handleMessage(msg):
if prs.Height == msg.Height then
Record in rs that a peer claims to have ⅔ majority for msg.BlockID
Send VoteSetBitsMessage showing votes node has for that BlockId
```
### ProposalMessage handler
```
handleMessage(msg):
if prs.Height != msg.Height || prs.Round != msg.Round || prs.Proposal then return
prs.Proposal = true
prs.ProposalBlockPartsHeader = msg.BlockPartsHeader
prs.ProposalBlockParts = empty set
prs.ProposalPOLRound = msg.POLRound
prs.ProposalPOL = nil
Send msg through internal peerMsgQueue to ConsensusState service
```
### ProposalPOLMessage handler
```
handleMessage(msg):
if prs.Height != msg.Height or prs.ProposalPOLRound != msg.ProposalPOLRound then return
prs.ProposalPOL = msg.ProposalPOL
```
### BlockPartMessage handler
```
handleMessage(msg):
if prs.Height != msg.Height || prs.Round != msg.Round then return
Record in prs that peer has block part msg.Part.Index
Send msg through internal peerMsgQueue to ConsensusState service
```
### VoteMessage handler
```
handleMessage(msg):
Record in prs that a peer knows vote with index msg.vote.ValidatorIndex for particular height and round
Send msg through internal peerMsgQueue to ConsensusState service
```
### VoteSetBitsMessage handler
```
handleMessage(msg):
Update prs for the bit-array of votes peer claims to have for the msg.BlockID
```
## Gossip Data Routine
It is used to send the following messages to the peer: `BlockPartMessage`, `ProposalMessage` and
`ProposalPOLMessage` on the DataChannel. The gossip data routine is based on the local RoundState (`rs`)
and the known PeerRoundState (`prs`). The routine repeats forever the logic shown below:
```
1a) if rs.ProposalBlockPartsHeader == prs.ProposalBlockPartsHeader and the peer does not have all the proposal parts then
Part = pick a random proposal block part the peer does not have
Send BlockPartMessage(rs.Height, rs.Round, Part) to the peer on the DataChannel
if send returns true, record that the peer knows the corresponding block Part
Continue
1b) if (0 < prs.Height) and (prs.Height < rs.Height) then
help peer catch up using gossipDataForCatchup function
Continue
1c) if (rs.Height != prs.Height) or (rs.Round != prs.Round) then
Sleep PeerGossipSleepDuration
Continue
// at this point rs.Height == prs.Height and rs.Round == prs.Round
1d) if (rs.Proposal != nil and !prs.Proposal) then
Send ProposalMessage(rs.Proposal) to the peer
if send returns true, record that the peer knows Proposal
if 0 <= rs.Proposal.POLRound then
polRound = rs.Proposal.POLRound
prevotesBitArray = rs.Votes.Prevotes(polRound).BitArray()
Send ProposalPOLMessage(rs.Height, polRound, prevotesBitArray)
Continue
2) Sleep PeerGossipSleepDuration
```
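A key primitive in step 1a is "pick a random proposal block part the peer does not have". Assuming both sides track part possession as bit-arrays, this reduces to subtracting the peer's bit-array from ours and sampling a set bit, roughly as sketched below (simplified types, not the real `BitArray` implementation).

```go
package gossip

import "math/rand"

// BitArray is a minimal stand-in for the reactor's part/vote bit-arrays.
type BitArray []bool

// Sub returns the bits set in b but not in other (what we have and the peer lacks).
func (b BitArray) Sub(other BitArray) BitArray {
	out := make(BitArray, len(b))
	for i, set := range b {
		out[i] = set && (i >= len(other) || !other[i])
	}
	return out
}

// PickRandom returns the index of a randomly chosen set bit, or false if none is set.
func (b BitArray) PickRandom() (int, bool) {
	var candidates []int
	for i, set := range b {
		if set {
			candidates = append(candidates, i)
		}
	}
	if len(candidates) == 0 {
		return 0, false
	}
	return candidates[rand.Intn(len(candidates))], true
}

// pickMissingPart chooses which part index to send to the peer in step 1a.
func pickMissingPart(ourParts, peerParts BitArray) (int, bool) {
	return ourParts.Sub(peerParts).PickRandom()
}
```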
### Gossip Data For Catchup
@ -240,65 +239,65 @@ and the known PeerRoundState (`prs`). The routine repeats forever the logic show
This function is responsible for helping the peer catch up when it is at a smaller height (prs.Height < rs.Height).
The function executes the following logic:
if peer does not have all block parts for prs.ProposalBlockPart then
blockMeta = Load Block Metadata for height prs.Height from blockStore
if blockMeta.BlockID.PartsHeader != prs.ProposalBlockPartsHeader then
Sleep PeerGossipSleepDuration
return
Part = pick a random proposal block part the peer does not have
Send BlockPartMessage(prs.Height, prs.Round, Part) to the peer on the DataChannel
if send returns true, record that the peer knows the corresponding block Part
return
else Sleep PeerGossipSleepDuration
## Gossip Votes Routine
It is used to send the following message: `VoteMessage` on the VoteChannel.
The gossip votes routine is based on the local RoundState (`rs`)
and the known PeerRoundState (`prs`). The routine repeats forever the logic shown below:
```
1a) if rs.Height == prs.Height then
if prs.Step == RoundStepNewHeight then
vote = random vote from rs.LastCommit the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
if prs.Step <= RoundStepPrevote and prs.Round != -1 and prs.Round <= rs.Round then
Prevotes = rs.Votes.Prevotes(prs.Round)
vote = random vote from Prevotes the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
if prs.Step <= RoundStepPrecommit and prs.Round != -1 and prs.Round <= rs.Round then
Precommits = rs.Votes.Precommits(prs.Round)
vote = random vote from Precommits the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
if prs.ProposalPOLRound != -1 then
PolPrevotes = rs.Votes.Prevotes(prs.ProposalPOLRound)
vote = random vote from PolPrevotes the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
1b) if prs.Height != 0 and rs.Height == prs.Height+1 then
vote = random vote from rs.LastCommit peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
1c) if prs.Height != 0 and rs.Height >= prs.Height+2 then
Commit = get commit from BlockStore for prs.Height
vote = random vote from Commit the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
2) Sleep PeerGossipSleepDuration
```
## QueryMaj23Routine
It is used to send the following message: `VoteSetMaj23Message`. `VoteSetMaj23Message` is sent to indicate that a given
BlockID has seen +2/3 votes. This routine is based on the local RoundState (`rs`) and the known PeerRoundState
(`prs`). The routine repeats forever the logic shown below.
@ -324,8 +323,8 @@ BlockID has seen +2/3 votes. This routine is based on the local RoundState (`rs`
Send m to peer
Sleep PeerQueryMaj23SleepDuration
1d) if prs.CatchupCommitRound != -1 and 0 < prs.Height and
prs.Height <= blockStore.Height() then
Commit = LoadCommit(prs.Height)
m = VoteSetMaj23Message(prs.Height,Commit.Round,Precommit,Commit.blockId)
Send m to peer
@ -339,14 +338,14 @@ BlockID has seen +2/3 votes. This routine is based on the local RoundState (`rs`
The Broadcast routine subscribes to an internal event bus to receive new round steps, vote messages and proposal
heartbeat messages, and broadcasts messages to peers upon receiving those events.
It broadcasts `NewRoundStepMessage` or `CommitStepMessage` upon new round state event. Note that
broadcasting these messages does not depend on the PeerRoundState; it is sent on the StateChannel.
Upon receiving VoteMessage it broadcasts `HasVoteMessage` message to its peers on the StateChannel.
`ProposalHeartbeatMessage` is sent the same way on the StateChannel.
## Channels
Defines 4 channels: state, data, vote and vote_set_bits. Each channel
has `SendQueueCapacity` and `RecvBufferCapacity` and
`RecvMessageCapacity` set to `maxMsgSize`.
Sending incorrectly encoded data will result in stopping the peer.
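To make "defines 4 channels" concrete, the reactor advertises one descriptor per channel with its capacities. The snippet below is a hedged sketch: the `ChannelDescriptor` struct is simplified, and the IDs, priorities and capacities are illustrative placeholders, not the constants Tendermint actually uses.

```go
package consensuschannels

// ChannelDescriptor is a simplified version of the p2p channel configuration.
type ChannelDescriptor struct {
	ID                  byte
	Priority            int
	SendQueueCapacity   int
	RecvBufferCapacity  int
	RecvMessageCapacity int
}

// exampleChannels sketches the shape of the four consensus channels; the IDs,
// priorities and capacities here are made up for illustration.
func exampleChannels(maxMsgSize int) []ChannelDescriptor {
	mk := func(id byte, priority int) ChannelDescriptor {
		return ChannelDescriptor{
			ID:                  id,
			Priority:            priority,
			SendQueueCapacity:   100,
			RecvBufferCapacity:  1000,
			RecvMessageCapacity: maxMsgSize,
		}
	}
	return []ChannelDescriptor{
		mk(0x01, 5),  // state
		mk(0x02, 10), // data
		mk(0x03, 7),  // vote
		mk(0x04, 1),  // vote_set_bits
	}
}
```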

+ 1
- 1
docs/spec/reactors/consensus/consensus.md

@ -23,7 +23,7 @@ processes using `BlockPartMessage`.
Validators in Tendermint communicate by peer-to-peer gossiping protocol. Each validator is connected
only to a subset of processes called peers. By the gossiping protocol, a validator sends to its peers
all needed information (`ProposalMessage`, `VoteMessage` and `BlockPartMessage`) so they can
reach agreement on some block, and also obtain the content of the chosen block (block parts). As
part of the gossiping protocol, processes also send auxiliary messages that inform peers about the
executed steps of the core consensus algorithm (`NewRoundStepMessage` and `CommitStepMessage`), and


+ 12
- 12
docs/spec/reactors/consensus/proposer-selection.md

@ -1,6 +1,6 @@
# Proposer selection procedure in Tendermint
This document specifies the Proposer Selection Procedure that is used in Tendermint to choose a round proposer.
As Tendermint is a “leader-based” protocol, the proposer selection is critical for its correct functioning.
Let us denote by `proposer_p(h,r)` the process returned by the Proposer Selection Procedure at process p, at height h
and round r. Then the Proposer Selection procedure should fulfill the following properties:
@ -9,13 +9,13 @@ and round r. Then the Proposer Selection procedure should fulfill the following
p and q, for each height h, and each round r,
proposer_p(h,r) = proposer_q(h,r)
`Liveness`: In every consecutive sequence of rounds of size K (K is system parameter), at least a
single round has an honest proposer.
`Fairness`: The proposer selection is proportional to the validator voting power, i.e., a validator with more
voting power is selected more frequently, proportional to its power. More precisely, given a set of processes
with the total voting power N, during a sequence of rounds of size N, every process is proposer in a number of rounds
equal to its voting power.
We now look at a few particular cases to understand better how fairness should be implemented.
If we have 4 processes with the following voting power distribution (p0,4), (p1, 2), (p2, 2), (p3, 2) at some round r,
@ -27,20 +27,20 @@ Let consider now the following scenario where a total voting power of faulty pro
p0: (p0,3), (p1, 1), (p2, 1), (p3, 1), (p4, 1), (p5, 1), (p6, 1), (p7, 1).
In this case the sequence of proposer selections looks like this:
`p0, p1, p2, p3, p0, p4, p5, p6, p7, p0, p0, p1, p2, p3, p0, p4, p5, p6, p7, p0, etc`
In this case, we see that a number of rounds coordinated by a faulty process is proportional to its voting power.
We also consider the case where voting power is uniformly distributed among processes, i.e., we have 10 processes
each with voting power of 1. Let us further assume that there are 3 faulty processes with consecutive addresses,
for example the first 3 processes are faulty. Then the sequence looks like this:
`p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, etc`
In this case, we have 3 consecutive rounds with a faulty proposer.
One special case we consider is the case where a single honest process p0 has most of the voting power, for example:
(p0,100), (p1, 2), (p2, 3), (p3, 4). Then the sequence of proposer selection looks like this:
p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p0, p1, p0, p0, p0, p0, p0, etc
This basically means that almost all rounds have the same proposer. But in this case, the process p0 already has enough
voting power to decide whatever it wants, so the fact that it coordinates almost all rounds seems correct.
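One well-known scheme that produces sequences like the ones above is a weighted round-robin (in essence, the approach Tendermint takes): each validator keeps an accumulator that grows by its voting power every round, the validator with the largest accumulator proposes, and that accumulator is then decreased by the total power. The sketch below is a simplified model of the idea, not the production code.

```go
package proposerselection

// Validator is a simplified validator with voting power and an accumulator.
type Validator struct {
	Name  string
	Power int64
	accum int64
}

// next performs one step of the weighted round-robin. It assumes at least one
// validator: every accumulator is incremented by that validator's power, the
// validator with the highest accumulator is chosen, and its accumulator is
// decreased by the total power.
func next(vals []*Validator) *Validator {
	var total int64
	for _, v := range vals {
		v.accum += v.Power
		total += v.Power
	}
	best := vals[0]
	for _, v := range vals[1:] {
		if v.accum > best.accum {
			best = v
		}
	}
	best.accum -= total
	return best
}
```

Running `next` repeatedly on the distribution (p0,4), (p1,2), (p2,2), (p3,2) should yield p0, p1, p2, p3, p0, ... so that over any 10 consecutive rounds p0 proposes 4 times, matching the fairness property stated above.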

+ 4
- 4
docs/spec/reactors/mempool/concurrency.md

@ -2,7 +2,7 @@
Look at the concurrency model this uses...
- Receiving CheckTx
- Broadcasting new tx
- Interfaces with consensus engine, reap/update while checking
- Calling the ABCI app (ordering. callbacks. how proxy works alongside the blockchain proxy which actually writes blocks)

+ 1
- 1
docs/spec/reactors/mempool/config.md

@ -11,12 +11,12 @@ Flag: `--mempool.recheck_empty=false`
Environment: `TM_MEMPOOL_RECHECK_EMPTY=false`
Config:
```
[mempool]
recheck_empty = false
```
## Recheck
`--mempool.recheck=false` (default: true)


+ 7
- 8
docs/spec/reactors/mempool/functionality.md

@ -6,26 +6,25 @@ consensus reactor when it is selected as the block proposer.
There are two sides to the mempool state:
- External: get, check, and broadcast new transactions
- Internal: return valid transaction, update list after block commit
## External functionality
External functionality is exposed via network interfaces
to potentially untrusted actors.
- CheckTx - triggered via RPC or P2P
- Broadcast - gossip messages after a successful check
## Internal functionality
Internal functionality is exposed via method calls to other
code compiled into the tendermint binary.
- Reap - get tx to propose in next block
- Update - remove tx that were included in last block
- ABCI.CheckTx - call ABCI app to validate the tx
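A hedged sketch of what this external/internal split looks like as a Go interface is shown below; the names are illustrative, not the actual mempool API.

```go
package mempool

// Tx is a raw transaction as received from RPC or a peer.
type Tx []byte

// Mempool sketches the two sides described above: the external side feeds
// transactions in, the internal side serves the consensus reactor.
type Mempool interface {
	// External: validate a tx via the ABCI app and, on success, keep it and
	// gossip it to peers.
	CheckTx(tx Tx) error

	// Internal: hand up to maxTxs pending transactions to the block proposer.
	Reap(maxTxs int) []Tx

	// Internal: drop transactions committed in the latest block and re-check
	// the remainder against the new application state.
	Update(height int64, committed []Tx) error
}
```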
What does it provide the consensus reactor?
What guarantees does it need from the ABCI app?


+ 14
- 14
docs/spec/reactors/mempool/messages.md

@ -35,12 +35,12 @@ Request (`POST http://gaia.zone:26657/`):
```json
{
  "id": "",
  "jsonrpc": "2.0",
  "method": "broadcast_sync",
  "params": {
    "tx": "F012A4BC68..."
  }
}
```
@ -48,14 +48,14 @@ Response:
```json
{
  "error": "",
  "result": {
    "hash": "E39AAB7A537ABAA237831742DCE1117F187C3C52",
    "log": "",
    "data": "",
    "code": 0
  },
  "id": "",
  "jsonrpc": "2.0"
}
```

+ 1
- 1
docs/spec/reactors/pex/pex.md

@ -95,6 +95,7 @@ remove from address book completely.
## Select Peers to Exchange
When we’re asked for peers, we select them as follows:
- select at most `maxGetSelection` peers
- try to select at least `minGetSelection` peers - if we have less than that, select them all.
- select a random, unbiased `getSelectionPercent` of the peers
@ -126,4 +127,3 @@ to use it in the PEX.
See the [trustmetric](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-006-trust-metric.md)
and [trustmetric usage](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-007-trust-metric-usage.md)
architecture docs for more details.

+ 26
- 29
docs/spec/software/abci.md

@ -44,7 +44,6 @@ Thus, during Commit, it is safe to reset the QueryState and the CheckTxState to
Note, however, that it is not possible to send transactions to Tendermint during Commit - if your app
tries to send a `/broadcast_tx` to Tendermint during Commit, it will deadlock.
## EndBlock Validator Updates
Updates to the Tendermint validator set can be made by returning `Validator`
@ -60,12 +59,12 @@ message PubKey {
string type
bytes data
}
```
The `pub_key` currently supports two types:
- `type = "ed25519"` and `data = <raw 32-byte public key>`
- `type = "secp256k1"` and `data = <33-byte OpenSSL compressed public key>`
If the address is provided, it must match the address of the pubkey, as
specified [here](/docs/spec/blockchain/encoding.md#Addresses)
@ -87,9 +86,9 @@ following rules:
- if power is 0, the validator must already exist, and will be removed from the
validator set
- if power is non-0:
  - if the validator does not already exist, it will be added to the validator
    set with the given power
  - if the validator does already exist, its power will be adjusted to the given power
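For example, an application that wants to add (or re-power) one validator and remove another could return a diff shaped like the following. This is a hedged sketch using locally defined stand-ins for the ABCI types shown above, since the generated type and field names vary across ABCI versions.

```go
package app

// Simplified versions of the ABCI messages described above.
type PubKey struct {
	Type string
	Data []byte
}

type Validator struct {
	PubKey PubKey
	Power  int64
}

// endBlockUpdates sketches a validator diff: a non-zero power upserts the
// validator, a zero power removes it from the set.
func endBlockUpdates(newKey, oldKey []byte) []Validator {
	return []Validator{
		{PubKey: PubKey{Type: "ed25519", Data: newKey}, Power: 10}, // add or adjust
		{PubKey: PubKey{Type: "ed25519", Data: oldKey}, Power: 0},  // remove
	}
}
```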
## InitChain Validator Updates
@ -114,10 +113,10 @@ features. These are:
When Tendermint connects to a peer, it sends two queries to the ABCI application
using the following paths, with no additional data:
- `/p2p/filter/addr/<IP:PORT>`, where `<IP:PORT>` denotes the IP address and
  the port of the connection
- `p2p/filter/id/<ID>`, where `<ID>` is the peer node ID (i.e. the
  pubkey.Address() for the peer's PubKey)
If either of these queries return a non-zero ABCI code, Tendermint will refuse
to connect to the peer.
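A hedged sketch of how an application might implement these filters in its Query handler is shown below; the request/response types are simplified stand-ins (the real ABCI messages carry more fields), and the allow-list policy is just one possible choice.

```go
package app

import "strings"

// Simplified request/response for the ABCI Query used by peer filtering.
type RequestQuery struct{ Path string }
type ResponseQuery struct{ Code uint32; Log string }

// filterPeers accepts a connection only if the address or node ID is allowed.
// Returning a non-zero code makes Tendermint refuse the peer.
func filterPeers(allowedAddrs, allowedIDs map[string]bool, req RequestQuery) ResponseQuery {
	switch {
	case strings.HasPrefix(req.Path, "/p2p/filter/addr/"):
		addr := strings.TrimPrefix(req.Path, "/p2p/filter/addr/")
		if !allowedAddrs[addr] {
			return ResponseQuery{Code: 1, Log: "address rejected"}
		}
	case strings.HasPrefix(req.Path, "/p2p/filter/id/"):
		id := strings.TrimPrefix(req.Path, "/p2p/filter/id/")
		if !allowedIDs[id] {
			return ResponseQuery{Code: 1, Log: "node id rejected"}
		}
	}
	return ResponseQuery{Code: 0} // code 0: allow the connection
}
```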
@ -128,11 +127,9 @@ On startup, Tendermint calls Info on the Query connection to get the latest
committed state of the app. The app MUST return information consistent with the
last block it successfully completed Commit for.
If the app successfully committed block H but not H+1, then `last_block_height = H` and `last_block_app_hash = <hash returned by Commit for block H>`. If the app
failed during the Commit of block H, then `last_block_height = H-1` and
`last_block_app_hash = <hash returned by Commit for block H-1, which is the hash in the header of block H>`.
We now distinguish three heights, and describe how Tendermint syncs itself with
the app.
@ -165,24 +162,24 @@ If `storeBlockHeight > stateBlockHeight+1`, panic
Now, the meat:
If `storeBlockHeight == stateBlockHeight && appBlockHeight < storeBlockHeight`,
replay all blocks in full from `appBlockHeight` to `storeBlockHeight`.
This happens if we completed processing the block, but the app forgot its height.

If `storeBlockHeight == stateBlockHeight && appBlockHeight == storeBlockHeight`, we're done.
This happens if we crashed at an opportune spot.

If `storeBlockHeight == stateBlockHeight+1`:
This happens if we started processing the block but didn't finish.

If `appBlockHeight < stateBlockHeight`,
replay all blocks in full from `appBlockHeight` to `storeBlockHeight-1`,
and replay the block at `storeBlockHeight` using the WAL.
This happens if the app forgot the last block it committed.

If `appBlockHeight == stateBlockHeight`,
replay the last block (storeBlockHeight) in full.
This happens if we crashed before the app finished Commit.

If `appBlockHeight == storeBlockHeight`,
update the state using the saved ABCI responses but don't run the block against the real app.
This happens if we crashed after the app finished Commit but before Tendermint saved the state.
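The rules above can be summarized as a small decision function over the three heights. The sketch below is a plain-English rendering of that logic, not the actual handshake code.

```go
package replay

// replayAction describes what the node should do after comparing the store,
// state and app block heights during the handshake.
func replayAction(storeH, stateH, appH int64) string {
	switch {
	case storeH == stateH && appH < storeH:
		return "replay all blocks in full from appBlockHeight to storeBlockHeight"
	case storeH == stateH && appH == storeH:
		return "nothing to do; we crashed at an opportune spot"
	case storeH == stateH+1 && appH < stateH:
		return "replay blocks in full up to storeBlockHeight-1, then replay the last block from the WAL"
	case storeH == stateH+1 && appH == stateH:
		return "replay the last block (storeBlockHeight) in full"
	case storeH == stateH+1 && appH == storeH:
		return "update state from saved ABCI responses without re-running the block"
	default:
		return "panic: inconsistent heights"
	}
}
```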

+ 6
- 0
docs/stop-words.txt

@ -0,0 +1,6 @@
investor
invest
investing
token distribution
atom distribution
distribution of atoms

+ 7
- 7
docs/tendermint-core/block-structure.md

@ -13,11 +13,11 @@ A
[Block](https://godoc.org/github.com/tendermint/tendermint/types#Block)
contains:
- a [Header](#header) contains merkle hashes for various chain states
- the
  [Data](https://godoc.org/github.com/tendermint/tendermint/types#Data)
  is all transactions which are to be processed
- the [LastCommit](#commit) > 2/3 signatures for the last block
The signatures returned along with block `H` are those validating block
`H-1`. This can be a little confusing, but we must also consider that
@ -66,7 +66,7 @@ effects of running that transaction will be first visible in the
`AppHash` from the block header at height `H+1`.
Like the `LastCommit` issue, this is a requirement of the immutability
of the block chain, as the application only applies transactions _after_
they are committed to the chain.
## Commit
@ -90,7 +90,7 @@ you look at the code, you will notice that we need to provide the
`chainID` of the blockchain in order to properly calculate the votes.
This is to protect anyone from swapping votes between chains to fake (or
frame) a validator. Also note that this `chainID` is in the
`genesis.json` from _Tendermint_, not the `genesis.json` from the
basecoin app ([that is a different
chainID...](https://github.com/cosmos/cosmos-sdk/issues/32)).


+ 10
- 10
docs/tendermint-core/light-client-protocol.md

@ -18,13 +18,13 @@ proofs](./merkle.md#iavl-tree).
## Properties
- You get the full collateralized security benefits of Tendermint; no
need to wait for confirmations.
- You get the full speed benefits of Tendermint; transactions
commit instantly.
- You can get the most recent version of the application state
non-interactively (without committing anything to the blockchain).
For example, this means that you can get the most recent value of a
name from the name-registry without worrying about fork censorship
attacks, without posting a commit and waiting for confirmations.
It's fast, secure, and free!

+ 13
- 12
docs/tendermint-core/running-in-production.md

@ -135,33 +135,34 @@ Tendermint, replay will fail with panic.
Recovering from data corruption can be hard and time-consuming. Here are two approaches you can take:
1. Delete the WAL file and restart Tendermint. It will attempt to sync with other peers.
2. Try to repair the WAL file manually:
   1. Create a backup of the corrupted WAL file:
      ```
      cp "$TMHOME/data/cs.wal/wal" /tmp/corrupted_wal_backup
      ```
   2. Use `./scripts/wal2json` to create a human-readable version:
      ```
      ./scripts/wal2json/wal2json "$TMHOME/data/cs.wal/wal" > /tmp/corrupted_wal
      ```
   3. Search for a "CORRUPTED MESSAGE" line.
   4. By looking at the previous message and the message after the corrupted one
      and looking at the logs, try to rebuild the message. If the subsequent
      messages are marked as corrupted too (this may happen if the length header
      got corrupted or some writes did not make it to the WAL ~ truncation),
      then remove all the lines starting from the corrupted one and restart
      Tendermint.
      ```
      $EDITOR /tmp/corrupted_wal
      ```
   5. After editing, convert this file back into binary form by running:
```
./scripts/json2wal/json2wal /tmp/corrupted_wal $TMHOME/data/cs.wal/wal


+ 6
- 6
docs/tendermint-core/secure-p2p.md

@ -61,9 +61,9 @@ Authenticated encryption is enabled by default.
## Additional Reading
- [Implementation](https://github.com/tendermint/tendermint/blob/64bae01d007b5bee0d0827ab53259ffd5910b4e6/p2p/conn/secret_connection.go#L47)
- [Original STS paper by Whitfield Diffie, Paul C. van Oorschot and
Michael J.
Wiener](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.216.6107&rep=rep1&type=pdf)
- [Further work on secret
handshakes](https://dominictarr.github.io/secret-handshake-paper/shs.pdf)

+ 14
- 14
docs/tendermint-core/using-tendermint.md

@ -39,20 +39,20 @@ definition](https://github.com/tendermint/tendermint/blob/master/types/genesis.g
#### Fields
- `genesis_time`: Official time of blockchain start.
- `chain_id`: ID of the blockchain. This must be unique for
every blockchain. If your testnet blockchains do not have unique
chain IDs, you will have a bad time.
- `validators`:
- `pub_key`: The first element specifies the `pub_key` type. 1
== Ed25519. The second element are the pubkey bytes.
- `power`: The validator's voting power.
- `name`: Name of the validator (optional).
- `app_hash`: The expected application hash (as returned by the
`ResponseInfo` ABCI message) upon genesis. If the app's hash does
not match, Tendermint will panic.
- `app_state`: The application state (e.g. initial distribution
of tokens).
#### Sample genesis.json


+ 3
- 3
docs/tendermint-core/validators.md

@ -2,7 +2,7 @@
Validators are responsible for committing new blocks in the blockchain.
These validators participate in the consensus protocol by broadcasting
_votes_ which contain cryptographic signatures signed by each
validator's private key.
Some Proof-of-Stake consensus algorithms aim to create a "completely"
@ -28,12 +28,12 @@ There are two ways to become validator.
## Committing a Block
_+2/3 is short for "more than 2/3"_
A block is committed when +2/3 of the validator set sign [precommit
votes](../spec/blockchain/blockchain.md#vote) for that block at the same `round`.
The +2/3 set of precommit votes is called a
[_commit_](../spec/blockchain/blockchain.md#commit). While any +2/3 set of
precommits for the same block at the same height&round can serve as
validation, the canonical commit is included in the next block (see
[LastCommit](../spec/blockchain/blockchain.md#last-commit)).

+ 2
- 2
docs/tools/benchmarking.md

@ -23,7 +23,7 @@ Blocks/sec 0.818 0.386 1 9
[Install Tendermint](../introduction/install)
This is currently set up to work on tendermint's develop branch. Please ensure
you are on that. (If not, update `tendermint` and `tmlibs` in gopkg.toml to use
the master branch.)
then run:
@ -32,7 +32,7 @@ tendermint init
tendermint node --proxy_app=kvstore
```
```
tm-bench localhost:26657
```


+ 2
- 1
docs/tools/monitoring.md

@ -26,6 +26,7 @@ use `kvstore`:
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint init
docker run -it --rm -v "/tmp:/tendermint" -p "26657:26657" --name=tm tendermint/tendermint node --proxy_app=kvstore
```
```
docker run -it --rm -p "26670:26670" --link=tm tendermint/monitor tm:26657
```
@ -71,7 +72,7 @@ Flags:
Run `tm-monitor` and visit http://localhost:26670. You should see the
list of the available RPC endpoints:
```
http://localhost:26670/status
http://localhost:26670/status/network
http://localhost:26670/monitor?endpoint=_


+ 2611
- 0
docs/yarn.lock
File diff suppressed because it is too large

