
spec: remove reactor section (#242)

Co-authored-by: Tess Rinearson <tess.rinearson@gmail.com>
Authored by Marko 4 years ago; committed by GitHub.
Parent commit: 72d15a4b07
53 changed files with 542 additions and 5366 deletions
  1. +0 -50      rust-spec/fastsync/README.md
  2. +0 -113     rust-spec/fastsync/Tinychain.tla
  3. +0 -1216    rust-spec/fastsync/fastsync.md
  4. +0 -825     rust-spec/fastsync/fastsync.tla
  5. +0 -606     rust-spec/fastsync/scheduler.tla
  6. +0 -13      rust-spec/fastsync/verification/001bmc-apalache.csv
  7. +0 -9       rust-spec/fastsync/verification/002tlc-tlc.csv
  8. +0 -10      rust-spec/fastsync/verification/MC-CorrectBlocksInv.cfg
  9. +0 -10      rust-spec/fastsync/verification/MC-CorrectNeverSuspectedInv.cfg
 10. +0 -10      rust-spec/fastsync/verification/MC-Sync1AsInv.cfg
 11. +0 -10      rust-spec/fastsync/verification/MC-Sync2AsInv.cfg
 12. +0 -10      rust-spec/fastsync/verification/MC-Sync3AsInv.cfg
 13. +0 -10      rust-spec/fastsync/verification/MC-SyncFromCorrectInv.cfg
 14. +0 -10      rust-spec/fastsync/verification/MC-Termination.cfg
 15. +0 -10      rust-spec/fastsync/verification/MC-TerminationByTO.cfg
 16. +0 -10      rust-spec/fastsync/verification/MC-TerminationCorr.cfg
 17. +0 -17      rust-spec/fastsync/verification/MC_1_0_4.tla
 18. +0 -15      rust-spec/fastsync/verification/MC_1_1_4.tla
 19. +0 -15      rust-spec/fastsync/verification/MC_1_2_4.tla
 20. +0 -15      rust-spec/fastsync/verification/MC_1_2_5.tla
 21. +0 -15      rust-spec/fastsync/verification/MC_1_3_5.tla
 22. +0 -17      rust-spec/fastsync/verification/MC_2_0_4.tla
 23. +0 -110     rust-spec/fastsync/verification/Tinychain.tla
 24. +0 -822     rust-spec/fastsync/verification/fastsync_apalache.tla
 25. +3 -0       spec/consensus/bft-time.md
 26. +3 -0       spec/consensus/consensus.md
 27. +3 -0       spec/consensus/creating-proposal.md
 28. +6 -0       spec/consensus/light-client/README.md
 29. +62 -58     spec/consensus/proposer-selection.md
 30. +40 -4      spec/core/data_structures.md
 31. +19 -0      spec/p2p/messages/README.md
 32. +68 -0      spec/p2p/messages/block-sync.md
 33. +149 -0     spec/p2p/messages/consensus.md
 34. +23 -0      spec/p2p/messages/evidence.md
 35. +33 -0      spec/p2p/messages/mempool.md
 36. +48 -0      spec/p2p/messages/pex.md
 37. +85 -0      spec/p2p/messages/state-sync.md
 38. BIN         spec/reactors/block_sync/img/bc-reactor-routines.png
 39. BIN         spec/reactors/block_sync/img/bc-reactor.png
 40. +0 -43      spec/reactors/block_sync/impl.md
 41. +0 -308     spec/reactors/block_sync/reactor.md
 42. +0 -366     spec/reactors/consensus/consensus-reactor.md
 43. +0 -187     spec/reactors/consensus/consensus.md
 44. +0 -10      spec/reactors/evidence/reactor.md
 45. +0 -8       spec/reactors/mempool/concurrency.md
 46. +0 -54      spec/reactors/mempool/config.md
 47. +0 -43      spec/reactors/mempool/functionality.md
 48. +0 -52      spec/reactors/mempool/messages.md
 49. +0 -27      spec/reactors/mempool/reactor.md
 50. +0 -164     spec/reactors/pex/pex.md
 51. +0 -12      spec/reactors/pex/reactor.md
 52. +0 -5       spec/reactors/readme.md
 53. +0 -77      spec/reactors/state_sync/reactor.md

rust-spec/fastsync/README.md  +0 -50

@@ -1,50 +0,0 @@
# Fast Sync Subprotocol Specification
This directory contains English and TLA+ specifications for the FastSync
protocol as it is currently implemented in the Tendermint Core codebase.
## English Specification
The [English Specification](fastsync.md) provides a detailed description of the
fast sync problem and the properties a correct protocol must satisfy. It also
includes a detailed description of the protocol as currently implemented in Go,
and an analysis of the implementation with respect to the properties.
It was found that the current implementation does not satisfy certain
properties, and is therefore not a correct solution to the fast sync problem.
The issue discovered holds for all previous implementations of the protocol. A
fix is described which is straightforward to implement.
## TLA+ Specification
Two TLA+ specifications are provided: a high level [specification
of the protocol](fastsync.tla) and a low level [specification of the scheduler
component of the implementation](scheduler.tla). Both specifications contain
properties that may be checked by the TLC model checker, though only for small
values of the relevant parameters.
We will continue to refine these specifications in our research work:
to remove the redundancy between them, to improve their utility to researchers and
engineers, and to improve their verifiability. For now, they provide a complete
description of the fast sync protocol in TLA+; especially the
[scheduler.tla](scheduler.tla), which maps very closely to the current
implementation of the [scheduler in Go](https://github.com/tendermint/tendermint/blob/master/blockchain/v2/scheduler.go).
The [scheduler.tla](scheduler.tla) can be model checked in TLC with the following
parameters:
- Constants:
    - numRequests <- 2
    - PeerIDs <- 0..2
    - ultimateHeight <- 3
- Invariants:
    - TypeOK
- Properties:
    - TerminationWhenNoAdvance
    - TerminationGoodPeers
    - TerminationAllCases
- Proofs that properties are not vacuously true:
    - TerminationGoodPeersPre
    - TerminationAllCases
    - SchedulerIncreasePre
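
As a rough illustration (not part of the original directory), these constants could be pinned in a small instance module in the style of the `MC_*.tla` files under `verification/`. The sketch below is hypothetical (the module and variable names are made up), and a TLC model or `.cfg` file would still have to name `Spec`, the `TypeOK` invariant, and the temporal properties listed above:

```tla
---------------------------- MODULE MC_scheduler ----------------------------
\* Hypothetical instance module that fixes the constants of scheduler.tla
\* at the small values listed above, so the spec can be model checked.

numRequests    == 2            \* at most 2 new requests per scheduling step
PeerIDs        == {0, 1, 2}    \* three potential peer ids (i.e., 0..2)
ultimateHeight == 3            \* the blockchain grows up to height 3

\* the state variables of scheduler.tla; schedState stands in for its
\* variable named "scheduler", to avoid reusing the module's name
VARIABLES turn, inEvent, envRunning, outEvent, scRunning, schedState

INSTANCE scheduler WITH scheduler <- schedState
==============================================================================
```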

rust-spec/fastsync/Tinychain.tla  +0 -113

@@ -1,113 +0,0 @@
-------------------------- MODULE Tinychain ----------------------------------
(* A very abstract model of Tendermint blockchain. Its only purpose is to highlight
the relation between validator sets, next validator sets, and last commits.
*)
EXTENDS Integers
\* type annotation
a <: b == a
\* the type of validator sets, e.g., STRING
VST == STRING
\* LastCommit type.
\* It contains:
\* 1) the flag of whether the block id equals the hash of the previous block, and
\* 2) the set of the validators who have committed the block.
\* In the implementation, blockId is the hash of the previous block, which cannot be forged.
\* We abstract the block id into whether it equals the hash of the previous block or not.
LCT == [blockIdEqRef |-> BOOLEAN, committers |-> VST]
\* Block type.
\* A block contains its height, validator set, next validator set, and last commit.
\* Moreover, it contains the flag that tells us whether the block is equal to the one
\* on the reference chain (this is an abstraction of hash).
BT == [height |-> Int, hashEqRef |-> BOOLEAN, wellFormed |-> BOOLEAN,
VS |-> VST, NextVS |-> VST, lastCommit |-> LCT]
CONSTANTS
(*
A set of abstract values, each value representing a set of validators.
For the purposes of this specification, they can be any values,
e.g., "s1", "s2", etc.
*)
VALIDATOR_SETS,
(* a nil validator set that is outside of VALIDATOR_SETS *)
NIL_VS,
(* The maximal height, up to which the blockchain may grow *)
MAX_HEIGHT
Heights == 1..MAX_HEIGHT
\* the set of all potential commits
Commits == [blockIdEqRef: BOOLEAN, committers: VALIDATOR_SETS]
\* the set of all potential blocks, not necessarily coming from the blockchain
Blocks ==
[height: Heights, hashEqRef: BOOLEAN, wellFormed: BOOLEAN,
VS: VALIDATOR_SETS, NextVS: VALIDATOR_SETS, lastCommit: Commits]
\* Does the chain contain a sound sequence of blocks that could be produced by
\* a 2/3 of faulty validators. This operator can be used to initialise the chain!
\* Since we are abstracting validator sets with VALIDATOR_SETS, which are
\* 2/3 quorums, we just compare committers to those sets. In a more detailed
\* specification, one would write the \subseteq operator instead of equality.
IsCorrectChain(chain) ==
\* restrict the structure of the blocks, to decrease the TLC search space
LET OkCommits == [blockIdEqRef: {TRUE}, committers: VALIDATOR_SETS]
OkBlocks == [height: Heights, hashEqRef: {TRUE}, wellFormed: {TRUE},
VS: VALIDATOR_SETS, NextVS: VALIDATOR_SETS, lastCommit: OkCommits]
IN
/\ chain \in [1..MAX_HEIGHT -> OkBlocks]
/\ \A h \in 1..MAX_HEIGHT:
LET b == chain[h] IN
/\ b.height = h \* the height is correct
/\ h > 1 =>
LET p == chain[h - 1] IN
/\ b.VS = p.NextVS \* the validators propagate from the previous block
/\ b.lastCommit.committers = p.VS \* and they are the committers
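\* As a small illustration (not part of the original module, assuming the
\* constants VALIDATOR_SETS = {"s1", "s2"} and MAX_HEIGHT = 2): the two-block
\* chain below satisfies IsCorrectChain, because block 2 carries block 1's
\* NextVS as its validator set and block 1's VS as the committers of its
\* last commit:
\*
\*   << [height |-> 1, hashEqRef |-> TRUE, wellFormed |-> TRUE,
\*       VS |-> "s1", NextVS |-> "s2",
\*       lastCommit |-> [blockIdEqRef |-> TRUE, committers |-> "s1"]],
\*      [height |-> 2, hashEqRef |-> TRUE, wellFormed |-> TRUE,
\*       VS |-> "s2", NextVS |-> "s1",
\*       lastCommit |-> [blockIdEqRef |-> TRUE, committers |-> "s1"]] >>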
\* The basic properties of blocks on the blockchain:
\* They should pass the validity check and they may verify the next block.
\* Does the block pass the consistency check against the next validators of the previous block
IsMatchingValidators(block, nextVS) ==
\* simply check that the validator set is propagated correctly.
\* (the implementation tests hashes and the application state)
block.VS = nextVS
\* Does the block verify the commit (of the next block)
PossibleCommit(block, commit) ==
\* the commits are signed by the block validators
/\ commit.committers = block.VS
\* The block id in the commit matches the block hash (abstract comparison).
\* (The implementation has extensive tests for that.)
\* this is an abstraction of: commit.blockId = hash(block)
\*
\* These are possible scenarios on the concrete hashes:
\*
\* scenario 1: commit.blockId = 10 /\ hash(block) = 10 /\ hash(ref) = 10
\* scenario 2: commit.blockId = 20 /\ hash(block) = 20 /\ block.VS /= ref.VS
\* scenario 3: commit.blockId = 50 /\ hash(block) = 100
\* scenario 4: commit.blockId = 10 /\ hash(block) = 100
\* scenario 5: commit.blockId = 100 /\ hash(block) = 10
/\ commit.blockIdEqRef = block.hashEqRef
\* the following test would be cheating, as we do not have access to the
\* reference chain:
\* /\ commit.blockIdEqRef
\* Basic invariants
\* every block has the validator set that is chosen by its predecessor
ValidBlockInv(chain) ==
\A h \in 2..MAX_HEIGHT:
IsMatchingValidators(chain[h], chain[h - 1].NextVS)
\* last commit of every block is signed by the validators of the predecessor
VerifiedBlockInv(chain) ==
\A h \in 2..MAX_HEIGHT:
PossibleCommit(chain[h - 1], chain[h].lastCommit)
==================================================================================

rust-spec/fastsync/fastsync.md  +0 -1216
File diff suppressed because it is too large.

rust-spec/fastsync/fastsync.tla  +0 -825

@@ -1,825 +0,0 @@
----------------------------- MODULE fastsync -----------------------------
(*
In this document we give the high level specification of the fast sync
protocol as implemented here:
https://github.com/tendermint/tendermint/tree/master/blockchain/v2.
We assume a system in which one node is trying to sync with the blockchain
(replicated state machine) by downloading blocks from the set of full nodes
(we call them peers) that are block providers, and executing transactions
(part of the block) against the application.
Peers can be faulty, and we don't make any assumption about the rate of correct/faulty
nodes in the node peerset (i.e., they can all be faulty). Correct peers are part
of the replicated state machine, i.e., they manage blockchain and execute
transactions against the same deterministic application. We don't make any
assumptions about the behavior of faulty processes. Processes (client and peers)
communicate by message passing.
In this specification, we model this system with two parties:
- the node (state machine) that is doing fastsync and
- the environment with which node interacts.
The environment consists of the set of (correct and faulty) peers with
which node interacts as part of fast sync protocol, but also contains some
aspects (adding/removing peers, timeout mechanism) that are part of the node
local environment (could be seen as part of the runtime in which node
executes).
As part of the fast sync protocol a node and the peers exchange the following messages:
- StatusRequest
- StatusResponse
- BlockRequest
- BlockResponse
- NoBlockResponse.
A node periodically issues StatusRequests to query peers for their current height (to decide which
blocks to request from which peers). Based on the StatusResponses sent by peers, the node requests
blocks for some height(s) by sending peers BlockRequest messages. A peer provides a requested block in a
BlockResponse message. If a peer does not want to provide a requested block, it sends a NoBlockResponse message.
In addition to those messages, a node in this spec receives additional input messages (events):
- AddPeer
- RemovePeer
- SyncTimeout.
These are the control messages that are provided to the node by its execution environment. AddPeer
covers the case when a connection is established with a peer; similarly, RemovePeer covers the case
when a connection with a peer is terminated. Finally, SyncTimeout is used to model a timeout trigger.
We assume that the fast sync protocol starts when connections with some number of peers
are established. Therefore, the peer set is initialised with a non-empty set of peer ids. Note, however,
that the node does not initially know the peer heights.
*)
EXTENDS Integers, FiniteSets, Sequences
CONSTANTS MAX_HEIGHT, \* the maximal height of blockchain
VALIDATOR_SETS, \* abstract set of validators
NIL_VS, \* a nil validator set
CORRECT, \* set of correct peers
FAULTY, \* set of faulty peers
TARGET_PENDING, \* maximum number of pending requests + downloaded blocks that are not yet processed
PEER_MAX_REQUESTS \* maximum number of pending requests per peer
ASSUME CORRECT \intersect FAULTY = {}
ASSUME TARGET_PENDING > 0
ASSUME PEER_MAX_REQUESTS > 0
\* the blockchain, see Tinychain
VARIABLE chain
\* introduce tiny chain as the source of blocks for the correct nodes
INSTANCE Tinychain
\* a special value for an undefined height
NilHeight == 0
\* the height of the genesis block
TrustedHeight == 1
\* the set of all peer ids the node can receive a message from
AllPeerIds == CORRECT \union FAULTY
\* A correct last commit has enough voting power, i.e., +2/3 of the voting power of
\* the corresponding validator set (enoughVotingPower = TRUE) that signs blockId.
\* BlockId defines correct previous block (in the implementation it is the hash of the block).
\* Instead of blockId, we encode blockIdEqRef, which is true, if the block id is equal
\* to the hash of the previous block, see Tinychain.
CorrectLastCommit(h) == chain[h].lastCommit
NilCommit == [blockIdEqRef |-> FALSE, committers |-> NIL_VS]
\* correct node always supplies the blocks from the blockchain
CorrectBlock(h) == chain[h]
NilBlock ==
[height |-> 0, hashEqRef |-> FALSE, wellFormed |-> FALSE,
lastCommit |-> NilCommit, VS |-> NIL_VS, NextVS |-> NIL_VS]
\* a special value for an undefined peer
NilPeer == "Nil" \* STRING for apalache efficiency
\* control the state of the syncing node
States == { "running", "finished"}
NoMsg == [type |-> "None"]
\* the variables of the node running fastsync
VARIABLES
state, \* running or finished
(*
blockPool [
height, \* current height we are trying to sync. Last block executed is height - 1
peerIds, \* set of peers node is connected to
peerHeights, \* map of peer ids to its (stated) height
blockStore, \* map of heights to (received) blocks
receivedBlocks, \* map of heights to peer that has sent us the block (stored in blockStore)
pendingBlocks, \* map of heights to peer to which block request has been sent
syncHeight, \* height at the point syncTimeout was triggered last time
syncedBlocks \* number of blocks synced since last syncTimeout. If it is 0 when the next timeout occurs, then protocol terminates.
]
*)
blockPool
\* the variables of the peers providing blocks
VARIABLES
(*
peersState [
peerHeights, \* track peer heights
statusRequested, \* boolean set to true when StatusRequest is received. Models periodic sending of StatusRequests.
blocksRequested \* set of BlockRequests received that are not answered yet
]
*)
peersState
\* the variables for the network and scheduler
VARIABLES
turn, \* who is taking the turn: "Peers" or "Node"
inMsg, \* a node receives message by this variable
outMsg \* a node sends a message by this variable
(* the variables of the node *)
nvars == <<state, blockPool>>
(*************** Type definitions for Apalache (model checker) **********************)
AsIntSet(S) == S <: {Int}
\* type of process ids
PIDT == STRING
AsPidSet(S) == S <: {PIDT}
\* ControlMessage type
CMT == [type |-> STRING, peerId |-> PIDT] \* type of control messages
\* InMsg type
IMT == [type |-> STRING, peerId |-> PIDT, height |-> Int, block |-> BT]
AsInMsg(m) == m <: IMT
AsInMsgSet(S) == S <: {IMT}
\* OutMsg type
OMT == [type |-> STRING, peerId |-> PIDT, height |-> Int]
AsOutMsg(m) == m <: OMT
AsOutMsgSet(S) == S <: {OMT}
\* block pool type
BPT == [height |-> Int, peerIds |-> {PIDT}, peerHeights |-> [PIDT -> Int],
blockStore |-> [Int -> BT], receivedBlocks |-> [Int -> PIDT],
pendingBlocks |-> [Int -> PIDT], syncedBlocks |-> Int, syncHeight |-> Int]
AsBlockPool(bp) == bp <: BPT
(******************** Sets of messages ********************************)
\* Control messages
ControlMsgs ==
AsInMsgSet([type: {"addPeer"}, peerId: AllPeerIds])
\union
AsInMsgSet([type: {"removePeer"}, peerId: AllPeerIds])
\union
AsInMsgSet([type: {"syncTimeout"}])
\* All messages (and events) received by a node
InMsgs ==
AsInMsgSet({NoMsg})
\union
AsInMsgSet([type: {"blockResponse"}, peerId: AllPeerIds, block: Blocks])
\union
AsInMsgSet([type: {"noBlockResponse"}, peerId: AllPeerIds, height: Heights])
\union
AsInMsgSet([type: {"statusResponse"}, peerId: AllPeerIds, height: Heights])
\union
ControlMsgs
\* Messages sent by a node and received by peers (environment in our case)
OutMsgs ==
AsOutMsgSet({NoMsg})
\union
AsOutMsgSet([type: {"statusRequest"}]) \* StatusRequest is broadcast to the set of connected peers.
\union
AsOutMsgSet([type: {"blockRequest"}, peerId: AllPeerIds, height: Heights])
(********************************** NODE ***********************************)
InitNode ==
\E pIds \in SUBSET AllPeerIds: \* set of peers node established initial connections with
/\ pIds \subseteq CORRECT \* this line is not necessary
/\ pIds /= AsPidSet({}) \* apalache better checks non-emptiness than subtracts from SUBSET
/\ blockPool = AsBlockPool([
height |-> TrustedHeight + 1, \* the genesis block is at height 1
syncHeight |-> TrustedHeight + 1, \* and we are synchronized to it
peerIds |-> pIds,
peerHeights |-> [p \in AllPeerIds |-> NilHeight], \* no peer height is known
blockStore |->
[h \in Heights |->
IF h > TrustedHeight THEN NilBlock ELSE chain[1]],
receivedBlocks |-> [h \in Heights |-> NilPeer],
pendingBlocks |-> [h \in Heights |-> NilPeer],
syncedBlocks |-> -1
])
/\ state = "running"
\* Remove faulty peers.
\* Returns new block pool.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L222
RemovePeers(rmPeers, bPool) ==
LET keepPeers == bPool.peerIds \ rmPeers IN
LET pHeights ==
[p \in AllPeerIds |-> IF p \in rmPeers THEN NilHeight ELSE bPool.peerHeights[p]] IN
LET failedRequests ==
{h \in Heights: /\ h >= bPool.height
/\ \/ bPool.pendingBlocks[h] \in rmPeers
\/ bPool.receivedBlocks[h] \in rmPeers} IN
LET pBlocks ==
[h \in Heights |-> IF h \in failedRequests THEN NilPeer ELSE bPool.pendingBlocks[h]] IN
LET rBlocks ==
[h \in Heights |-> IF h \in failedRequests THEN NilPeer ELSE bPool.receivedBlocks[h]] IN
LET bStore ==
[h \in Heights |-> IF h \in failedRequests THEN NilBlock ELSE bPool.blockStore[h]] IN
IF keepPeers /= bPool.peerIds
THEN [bPool EXCEPT
!.peerIds = keepPeers,
!.peerHeights = pHeights,
!.pendingBlocks = pBlocks,
!.receivedBlocks = rBlocks,
!.blockStore = bStore
]
ELSE bPool
\* Add a peer.
\* see https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L198
AddPeer(peer, bPool) ==
[bPool EXCEPT !.peerIds = bPool.peerIds \union {peer}]
(*
Handle StatusResponse message.
If valid status response, update peerHeights.
If invalid (height is smaller than the current), then remove peer.
Returns new block pool.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L667
*)
HandleStatusResponse(msg, bPool) ==
LET peerHeight == bPool.peerHeights[msg.peerId] IN
IF /\ msg.peerId \in bPool.peerIds
/\ msg.height >= peerHeight
THEN \* a correct response
LET pHeights == [bPool.peerHeights EXCEPT ![msg.peerId] = msg.height] IN
[bPool EXCEPT !.peerHeights = pHeights]
ELSE RemovePeers({msg.peerId}, bPool) \* the peer has sent us message with smaller height or peer is not in our peer list
(*
Handle BlockResponse message.
If valid block response, update blockStore, pendingBlocks and receivedBlocks.
If invalid (unsolicited response or malformed block), then remove peer.
Returns new block pool.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L522
*)
HandleBlockResponse(msg, bPool) ==
LET h == msg.block.height IN
IF /\ msg.peerId \in bPool.peerIds
/\ bPool.blockStore[h] = NilBlock
/\ bPool.pendingBlocks[h] = msg.peerId
/\ msg.block.wellFormed
THEN
[bPool EXCEPT
!.blockStore = [bPool.blockStore EXCEPT ![h] = msg.block],
!.receivedBlocks = [bPool.receivedBlocks EXCEPT![h] = msg.peerId],
!.pendingBlocks = [bPool.pendingBlocks EXCEPT![h] = NilPeer]
]
ELSE RemovePeers({msg.peerId}, bPool)
HandleNoBlockResponse(msg, bPool) ==
RemovePeers({msg.peerId}, bPool)
\* Compute max peer height.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L440
MaxPeerHeight(bPool) ==
IF bPool.peerIds = AsPidSet({})
THEN 0 \* no peers, just return 0
ELSE LET Hts == {bPool.peerHeights[p] : p \in bPool.peerIds} IN
CHOOSE max \in Hts: \A h \in Hts: h <= max
(* Returns next height for which request should be sent.
Returns NilHeight in case there is no height for which request can be sent.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L454 *)
FindNextRequestHeight(bPool) ==
LET S == {i \in Heights:
/\ i >= bPool.height
/\ i <= MaxPeerHeight(bPool)
/\ bPool.blockStore[i] = NilBlock
/\ bPool.pendingBlocks[i] = NilPeer} IN
IF S = AsIntSet({})
THEN NilHeight
ELSE
CHOOSE min \in S: \A h \in S: h >= min
\* Returns number of pending requests for a given peer.
NumOfPendingRequests(bPool, peer) ==
LET peerPendingRequests ==
{h \in Heights:
/\ h >= bPool.height
/\ bPool.pendingBlocks[h] = peer
}
IN
Cardinality(peerPendingRequests)
(* Returns a peer that can serve the block for a given height.
Returns NilPeer in case there is no such peer.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L477 *)
FindPeerToServe(bPool, h) ==
LET peersThatCanServe == { p \in bPool.peerIds:
/\ bPool.peerHeights[p] >= h
/\ NumOfPendingRequests(bPool, p) < PEER_MAX_REQUESTS } IN
LET pendingBlocks ==
{i \in Heights:
/\ i >= bPool.height
/\ \/ bPool.pendingBlocks[i] /= NilPeer
\/ bPool.blockStore[i] /= NilBlock
} IN
IF \/ peersThatCanServe = AsPidSet({})
\/ Cardinality(pendingBlocks) >= TARGET_PENDING
THEN NilPeer
\* pick a peer that can serve request for height h that has minimum number of pending requests
ELSE CHOOSE p \in peersThatCanServe: \A q \in peersThatCanServe:
/\ NumOfPendingRequests(bPool, p) <= NumOfPendingRequests(bPool, q)
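\* A small worked example (illustrative, not part of the original spec):
\* with PEER_MAX_REQUESTS = 2 and TARGET_PENDING = 3, suppose bPool.height = 4,
\* peers p1 and p2 report heights 6 and 5, p1 has one pending request
\* (for height 4), p2 has none, and no blocks are stored yet.
\* Then FindNextRequestHeight(bPool) = 5 and FindPeerToServe(bPool, 5) = p2:
\* both peers are high enough and below the per-peer limit, only one height is
\* pending or stored (1 < TARGET_PENDING), and p2 has fewer pending requests.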
\* Make a request for a block (if possible) and return a request message and block pool.
CreateRequest(bPool) ==
LET nextHeight == FindNextRequestHeight(bPool) IN
IF nextHeight = NilHeight THEN [msg |-> AsOutMsg(NoMsg), pool |-> bPool]
ELSE
LET peer == FindPeerToServe(bPool, nextHeight) IN
IF peer = NilPeer THEN [msg |-> AsOutMsg(NoMsg), pool |-> bPool]
ELSE
LET m == [type |-> "blockRequest", peerId |-> peer, height |-> nextHeight] IN
LET newPool == [bPool EXCEPT
!.pendingBlocks = [bPool.pendingBlocks EXCEPT ![nextHeight] = peer]
] IN
[msg |-> m, pool |-> newPool]
\* Returns node state, i.e., defines termination condition.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L432
ComputeNextState(bPool) ==
IF bPool.syncedBlocks = 0 \* corresponds to the syncTimeout in case no progress has been made for a period of time.
THEN "finished"
ELSE IF /\ bPool.height > 1
/\ bPool.height >= MaxPeerHeight(bPool) \* see https://github.com/tendermint/tendermint/blob/61057a8b0af2beadee106e47c4616b279e83c920/blockchain/v2/scheduler.go#L566
THEN "finished"
ELSE "running"
(* Verify if commit is for the given block id and if commit has enough voting power.
See https://github.com/tendermint/tendermint/blob/61057a8b0af2beadee106e47c4616b279e83c920/blockchain/v2/processor_context.go#L12 *)
VerifyCommit(block, lastCommit) ==
PossibleCommit(block, lastCommit)
(* Tries to execute next block in the pool, i.e., defines block validation logic.
Returns the new block pool (peers that have sent invalid blocks are removed).
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/processor.go#L135 *)
ExecuteBlocks(bPool) ==
LET bStore == bPool.blockStore IN
LET block0 == bStore[bPool.height - 1] IN
\* blockPool is initialized with height = TrustedHeight + 1,
\* so bStore[bPool.height - 1] is well defined
LET block1 == bStore[bPool.height] IN
LET block2 == bStore[bPool.height + 1] IN
IF block1 = NilBlock \/ block2 = NilBlock
THEN bPool \* we don't have two next consecutive blocks
ELSE IF ~IsMatchingValidators(block1, block0.NextVS)
\* Check that block1.VS = block0.NextVS.
\* Otherwise, CorrectBlocksInv fails.
\* In the implementation NextVS is part of the application state,
\* so a mismatch can be found without access to block0.NextVS.
THEN \* the block does not have the expected validator set
RemovePeers({bPool.receivedBlocks[bPool.height]}, bPool)
ELSE IF ~VerifyCommit(block1, block2.lastCommit)
\* Verify commit of block2 based on block1.
\* Interestingly, we do not have to call IsMatchingValidators.
THEN \* remove the peers of block1 and block2, as they are considered faulty
RemovePeers({bPool.receivedBlocks[bPool.height],
bPool.receivedBlocks[bPool.height + 1]},
bPool)
ELSE \* all good, execute block at position height
[bPool EXCEPT !.height = bPool.height + 1]
\* Defines logic for pruning peers.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L613
TryPrunePeer(bPool, suspectedSet, isTimedOut) ==
(* -----------------------------------------------------------------------------------------------------------------------*)
(* Corresponds to function prunablePeers in scheduler.go file. Note that this function only checks if block has been *)
(* received from a peer during peerTimeout period. *)
(* Note that in case no request has been scheduled to a correct peer, or a request has been scheduled *)
(* recently, so the peer hasn't responded yet, a peer will be removed as no block is received within peerTimeout. *)
(* In case of faulty peers, we don't have any guarantee that they will respond. *)
(* Therefore, we model this with nondeterministic behavior as it could lead to peer removal, for both correct and faulty. *)
(* See scheduler.go *)
(* https://github.com/tendermint/tendermint/blob/4298bbcc4e25be78e3c4f21979d6aa01aede6e87/blockchain/v2/scheduler.go#L335 *)
LET toRemovePeers == bPool.peerIds \intersect suspectedSet IN
(*
Corresponds to logic for pruning a peer that is responsible for delivering block for the next height.
The pruning logic for the next height is based on the time when a BlockRequest is sent. Therefore, if a request is sent
to a correct peer for the next height (blockPool.height), it should never be removed by this check as we assume that
correct peers respond timely and reliably. However, if a request is sent to a faulty peer then we
might get response on time or not, which is modelled with nondeterministic isTimedOut flag.
See scheduler.go
https://github.com/tendermint/tendermint/blob/4298bbcc4e25be78e3c4f21979d6aa01aede6e87/blockchain/v2/scheduler.go#L617
*)
LET nextHeightPeer == bPool.pendingBlocks[bPool.height] IN
LET prunablePeers ==
IF /\ nextHeightPeer /= NilPeer
/\ nextHeightPeer \in FAULTY
/\ isTimedOut
THEN toRemovePeers \union {nextHeightPeer}
ELSE toRemovePeers
IN
RemovePeers(prunablePeers, bPool)
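\* An illustrative scenario (not part of the original spec): if
\* bPool.peerIds = {p1, p2}, suspectedSet = {p1}, the request for the next
\* height is pending at a faulty peer p2, and isTimedOut = TRUE, then both
\* p1 and p2 are pruned; with isTimedOut = FALSE, only the suspected p1 is.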
\* Handle SyncTimeout. It models if progress has been made (height has increased) since the last SyncTimeout event.
HandleSyncTimeout(bPool) ==
[bPool EXCEPT
!.syncedBlocks = bPool.height - bPool.syncHeight,
!.syncHeight = bPool.height
]
HandleResponse(msg, bPool) ==
IF msg.type = "blockResponse" THEN
HandleBlockResponse(msg, bPool)
ELSE IF msg.type = "noBlockResponse" THEN
HandleNoBlockResponse(msg, bPool)
ELSE IF msg.type = "statusResponse" THEN
HandleStatusResponse(msg, bPool)
ELSE IF msg.type = "addPeer" THEN
AddPeer(msg.peerId, bPool)
ELSE IF msg.type = "removePeer" THEN
RemovePeers({msg.peerId}, bPool)
ELSE IF msg.type = "syncTimeout" THEN
HandleSyncTimeout(bPool)
ELSE
bPool
(*
At every node step we execute the following steps (atomically):
1) input message is consumed and the corresponding handler is called,
2) pruning logic is called
3) block execution is triggered (we try to execute block at next height)
4) a request to a peer is made (if possible) and
5) we decide if the termination condition is satisfied, and if so, we stop.
*)
NodeStep ==
\E suspectedSet \in SUBSET AllPeerIds: \* suspectedSet is a nondeterministic set of peers
\E isTimedOut \in BOOLEAN:
LET bPool == HandleResponse(inMsg, blockPool) IN
LET bp == TryPrunePeer(bPool, suspectedSet, isTimedOut) IN
LET nbPool == ExecuteBlocks(bp) IN
LET msgAndPool == CreateRequest(nbPool) IN
LET nstate == ComputeNextState(msgAndPool.pool) IN
/\ state' = nstate
/\ blockPool' = msgAndPool.pool
/\ outMsg' = msgAndPool.msg
/\ inMsg' = AsInMsg(NoMsg)
\* If node is running, then in every step we try to create blockRequest.
\* In addition, input message (if exists) is consumed and processed.
NextNode ==
\/ /\ state = "running"
/\ NodeStep
\/ /\ state = "finished"
/\ UNCHANGED <<nvars, inMsg, outMsg>>
(********************************** Peers ***********************************)
InitPeers ==
\E pHeights \in [AllPeerIds -> Heights]:
peersState = [
peerHeights |-> pHeights,
statusRequested |-> FALSE,
blocksRequested |-> AsOutMsgSet({})
]
HandleStatusRequest(msg, pState) ==
[pState EXCEPT
!.statusRequested = TRUE
]
HandleBlockRequest(msg, pState) ==
[pState EXCEPT
!.blocksRequested = pState.blocksRequested \union AsOutMsgSet({msg})
]
HandleRequest(msg, pState) ==
IF msg = AsOutMsg(NoMsg)
THEN pState
ELSE IF msg.type = "statusRequest"
THEN HandleStatusRequest(msg, pState)
ELSE HandleBlockRequest(msg, pState)
CreateStatusResponse(peer, pState, anyHeight) ==
LET m ==
IF peer \in CORRECT
THEN AsInMsg([type |-> "statusResponse", peerId |-> peer, height |-> pState.peerHeights[peer]])
ELSE AsInMsg([type |-> "statusResponse", peerId |-> peer, height |-> anyHeight]) IN
[msg |-> m, peers |-> pState]
CreateBlockResponse(msg, pState, arbitraryBlock) ==
LET m ==
IF msg.peerId \in CORRECT
THEN AsInMsg([type |-> "blockResponse", peerId |-> msg.peerId, block |-> CorrectBlock(msg.height)])
ELSE AsInMsg([type |-> "blockResponse", peerId |-> msg.peerId, block |-> arbitraryBlock]) IN
LET npState ==
[pState EXCEPT
!.blocksRequested = pState.blocksRequested \ {msg}
] IN
[msg |-> m, peers |-> npState]
GrowPeerHeight(pState) ==
\E p \in CORRECT:
/\ pState.peerHeights[p] < MAX_HEIGHT
/\ peersState' = [pState EXCEPT !.peerHeights[p] = @ + 1]
/\ inMsg' = AsInMsg(NoMsg)
SendStatusResponseMessage(pState) ==
/\ \E arbitraryHeight \in Heights:
\E peer \in AllPeerIds:
LET msgAndPeers == CreateStatusResponse(peer, pState, arbitraryHeight) IN
/\ peersState' = msgAndPeers.peers
/\ inMsg' = msgAndPeers.msg
SendAddPeerMessage ==
\E peer \in AllPeerIds:
inMsg' = AsInMsg([type |-> "addPeer", peerId |-> peer])
SendRemovePeerMessage ==
\E peer \in AllPeerIds:
inMsg' = AsInMsg([type |-> "removePeer", peerId |-> peer])
SendSyncTimeoutMessage ==
inMsg' = AsInMsg([type |-> "syncTimeout"])
SendControlMessage ==
\/ SendAddPeerMessage
\/ SendRemovePeerMessage
\/ SendSyncTimeoutMessage
\* An extremely important property of block hashes (blockId):
\* If the block hash coincides with the hash of the reference block,
\* then the blocks should be equal.
UnforgeableBlockId(height, block) ==
block.hashEqRef => block = chain[height]
\* A faulty peer cannot forge enough of the validators signatures.
\* In other words: If a commit contains enough signatures from the validators (in reality 2/3, in the model all),
\* then the blockID points to the block on the chain, encoded as block.lastCommit.blockIdEqRef being true
\* A more precise rule should have checked that the committers have over 2/3 of the VS's voting power.
NoFork(height, block) ==
(height > 1 /\ block.lastCommit.committers = chain[height - 1].VS)
=> block.lastCommit.blockIdEqRef
\* Can this block be produced by a faulty peer, assuming it cannot generate forks (a basic assumption of the protocol)?
IsBlockByFaulty(height, block) ==
/\ block.height = height
/\ UnforgeableBlockId(height, block)
/\ NoFork(height, block)
SendBlockResponseMessage(pState) ==
\* a response to a requested block: either by a correct, or by a faulty peer
\/ /\ pState.blocksRequested /= AsOutMsgSet({})
/\ \E msg \in pState.blocksRequested:
\E block \in Blocks:
/\ IsBlockByFaulty(msg.height, block)
/\ LET msgAndPeers == CreateBlockResponse(msg, pState, block) IN
/\ peersState' = msgAndPeers.peers
/\ inMsg' = msgAndPeers.msg
\* a faulty peer can always send an unsolicited block
\/ \E peerId \in FAULTY:
\E block \in Blocks:
/\ IsBlockByFaulty(block.height, block)
/\ peersState' = pState
/\ inMsg' = AsInMsg([type |-> "blockResponse",
peerId |-> peerId, block |-> block])
SendNoBlockResponseMessage(pState) ==
/\ peersState' = pState
/\ inMsg' \in AsInMsgSet([type: {"noBlockResponse"}, peerId: FAULTY, height: Heights])
SendResponseMessage(pState) ==
\/ SendBlockResponseMessage(pState)
\/ SendNoBlockResponseMessage(pState)
\/ SendStatusResponseMessage(pState)
NextEnvStep(pState) ==
\/ SendResponseMessage(pState)
\/ GrowPeerHeight(pState)
\/ SendControlMessage /\ peersState' = pState
\* note that we propagate pState that was missing in the previous version
\* The peers consume a message and update their local state. They then make a single step, i.e., send at most a single message.
\* The message sent can be either a response to a request or a faulty message (sent by a faulty process).
NextPeers ==
LET pState == HandleRequest(outMsg, peersState) IN
/\ outMsg' = AsOutMsg(NoMsg)
/\ NextEnvStep(pState)
\* the composition of the node, the peers, the network and scheduler
Init ==
/\ IsCorrectChain(chain) \* initialize the blockchain
/\ InitNode
/\ InitPeers
/\ turn = "Peers"
/\ inMsg = AsInMsg(NoMsg)
/\ outMsg = AsOutMsg([type |-> "statusRequest"])
Next ==
IF turn = "Peers"
THEN
/\ NextPeers
/\ turn' = "Node"
/\ UNCHANGED <<nvars, chain>>
ELSE
/\ NextNode
/\ turn' = "Peers"
/\ UNCHANGED <<peersState, chain>>
FlipTurn ==
turn' =
IF turn = "Peers" THEN
"Node"
ELSE
"Peers"
\* Compute max peer height. Used as a helper operator in properties.
MaxCorrectPeerHeight(bPool) ==
LET correctPeers == {p \in bPool.peerIds: p \in CORRECT} IN
IF correctPeers = AsPidSet({})
THEN 0 \* no peers, just return 0
ELSE LET Hts == {bPool.peerHeights[p] : p \in correctPeers} IN
CHOOSE max \in Hts: \A h \in Hts: h <= max
\* properties to check
TypeOK ==
/\ state \in States
/\ inMsg \in InMsgs
/\ outMsg \in OutMsgs
/\ turn \in {"Peers", "Node"}
/\ peersState \in [
peerHeights: [AllPeerIds -> Heights \union {NilHeight}],
statusRequested: BOOLEAN,
blocksRequested:
SUBSET
[type: {"blockRequest"}, peerId: AllPeerIds, height: Heights]
]
/\ blockPool \in [
height: Heights,
peerIds: SUBSET AllPeerIds,
peerHeights: [AllPeerIds -> Heights \union {NilHeight}],
blockStore: [Heights -> Blocks \union {NilBlock}],
receivedBlocks: [Heights -> AllPeerIds \union {NilPeer}],
pendingBlocks: [Heights -> AllPeerIds \union {NilPeer}],
syncedBlocks: Heights \union {NilHeight, -1},
syncHeight: Heights
]
(* Incorrect synchronization: The last block may never be received *)
Sync1 ==
[](state = "finished" =>
blockPool.height >= MaxCorrectPeerHeight(blockPool))
Sync1AsInv ==
state = "finished" => blockPool.height >= MaxCorrectPeerHeight(blockPool)
(* Incorrect synchronization, as there may be a timeout *)
Sync2 ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ [] (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
Sync2AsInv ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
(* Correct synchronization *)
Sync3 ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ blockPool.syncedBlocks <= 0 \* timeout
\/ [] (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
Sync3AsInv ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ blockPool.syncedBlocks <= 0 \* timeout
\/ (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
(* Naive termination *)
\* This property is violated, as the faulty peers may produce infinitely many responses
Termination ==
WF_turn(FlipTurn) => <>(state = "finished")
(* Termination by timeout: the protocol terminates, if there is a timeout *)
\* the precondition: fair flip turn and eventual timeout when no new blocks were synchronized
TerminationByTOPre ==
/\ WF_turn(FlipTurn)
/\ <>(inMsg.type = "syncTimeout" /\ blockPool.height <= blockPool.syncHeight)
TerminationByTO ==
TerminationByTOPre => <>(state = "finished")
(* The termination property when we only have correct peers *)
\* as correct peers may spam the node with addPeer, removePeer, and statusResponse,
\* we have to enforce eventual response (there are no queues in our spec)
CorrBlockResponse ==
\A h \in Heights:
[](outMsg.type = "blockRequest" /\ outMsg.height = h
=> <>(inMsg.type = "blockResponse" /\ inMsg.block.height = h))
\* a precondition for termination in presence of only correct processes
TerminationCorrPre ==
/\ FAULTY = AsPidSet({})
/\ WF_turn(FlipTurn)
/\ CorrBlockResponse
\* termination when there are only correct processes
TerminationCorr ==
TerminationCorrPre => <>(state = "finished")
\* All synchronized blocks (but the last one) are exactly like in the reference chain
CorrectBlocksInv ==
\/ state /= "finished"
\/ \A h \in 1..(blockPool.height - 1):
blockPool.blockStore[h] = chain[h]
\* A false expectation that the protocol only finishes with the blocks
\* from the processes that had not been suspected in being faulty
SyncFromCorrectInv ==
\/ state /= "finished"
\/ \A h \in 1..blockPool.height:
blockPool.receivedBlocks[h] \in blockPool.peerIds \union {NilPeer}
\* A false expectation that a correct process is never removed from the set of peer ids.
\* A correct process may reply too late and then gets evicted.
CorrectNeverSuspectedInv ==
CORRECT \subseteq blockPool.peerIds
BlockPoolInvariant ==
\A h \in Heights:
\* waiting for a block to arrive
\/ /\ blockPool.receivedBlocks[h] = NilPeer
/\ blockPool.blockStore[h] = NilBlock
\* valid block is received and is present in the store
\/ /\ blockPool.receivedBlocks[h] /= NilPeer
/\ blockPool.blockStore[h] /= NilBlock
/\ blockPool.pendingBlocks[h] = NilPeer
(* a few simple properties that trigger counterexamples *)
\* Shows execution in which peer set is empty
PeerSetIsNeverEmpty == blockPool.peerIds /= AsPidSet({})
\* Shows execution in which state = "finished" and MaxPeerHeight is not equal to 1
StateNotFinished ==
state /= "finished" \/ MaxPeerHeight(blockPool) = 1
=============================================================================
\*=============================================================================
\* Modification History
\* Last modified Fri May 29 20:41:53 CEST 2020 by igor
\* Last modified Thu Apr 16 16:57:22 CEST 2020 by zarkomilosevic
\* Created Tue Feb 04 10:36:18 CET 2020 by zarkomilosevic

rust-spec/fastsync/scheduler.tla  +0 -606

@@ -1,606 +0,0 @@
------------------------------- MODULE scheduler -------------------------------
(*
A specification of the fast sync scheduler that is introduced in blockchain/v2:
https://github.com/tendermint/tendermint/tree/brapse/blockchain-v2-riri-reactor-2
The model includes:
- a scheduler that maintains the peers and blocks that it receives from the peers, and
- one environment simulating a correct peer
This specification focuses on the events that are received and produced by the scheduler.
Communication between the scheduler and the other fastsync components is not specified.
*)
EXTENDS Integers, FiniteSets
\* the protocol parameters
CONSTANTS
PeerIDs, \* potential peer ids, a set of integers, e.g. 0..2
ultimateHeight, \* the maximum height of the blockchain, an integer, e.g. 3
numRequests \* the maximum number of requests made when scheduling new blocks, e.g. 2
\* a few definitions
None == -1 \* an undefined value
Heights == 0..ultimateHeight \* potential heights
noErr == "errNone"
Errors == {
noErr, "errPeerNotFound", "errDelRemovedPeer", "errAddDuplicatePeer",
"errUpdateRemovedPeer", "errAfterPeerRemove", "errBadPeer", "errBadPeerState",
"errProcessedBlockEv", "finished", "timeout", "errAddRemovedPeer", "errPeerNoBlock", "errBadBlockState"}
PeerStates == {"peerStateUnknown", "peerStateNew", "peerStateReady", "peerStateRemoved"}
BlockStates == {
"blockStateUnknown", "blockStateNew", "blockStatePending",
"blockStateReceived", "blockStateProcessed"}
\* basic stuff
Min(a, b) == IF a < b THEN a ELSE b
Max(a, b) == IF a > b THEN a ELSE b
\* the state of the scheduler:
VARIABLE turn \* who makes a step: the scheduler or the environment
\* the state of the reactor:
VARIABLES inEvent, \* an event from the environment to the scheduler
envRunning \* a Boolean, negation of stopProcessing in the implementation
\* the state of the scheduler:
VARIABLES outEvent, \* an event from the scheduler to the environment
scRunning
\* the block pool:
VARIABLE scheduler
(*
scheduler is a record that contains:
height: Int,
height of the next block to collect
peers: PeerIDs,
the set of peers that have connected in the past, may include removed peers
peerHeights: [PeerIDs -> Heights],
a map to collect the peer heights, >0 if peer in ready state, None (-1) otherwise
peerStates: [PeerIDs -> PeerStates],
a map to record the peer states
blockStates: [Heights -> BlockStates]
a set of heights for which blocks are to be scheduled, pending or received
pendingBlocks: [Heights -> PeerIDs],
a set of heights for which blocks are to be scheduled or pending
receivedBlocks: [Heights -> PeerIDs],
a set of heights for which blocks were received but not yet processed
blocks: Heights,
the full set of blocks requested or downloaded by the scheduler
*)
vars == <<turn, envRunning, inEvent, scRunning, outEvent, scheduler>>
\* for now just keep the height in the block
Blocks == [ height: Heights ]
noEvent == [type |-> "NoEvent"]
InEvents ==
{noEvent} \cup
[type: {"rTrySchedule", "tNoAdvanceExp"}] \cup
[type: {"bcStatusResponse"}, peerID: PeerIDs, height: Heights] \cup
[type: {"bcBlockResponse"}, peerID: PeerIDs, height: Heights, block: Blocks] \cup
[type: {"bcNoBlockResponse"}, peerID: PeerIDs, height: Heights] \cup
[type: {"pcBlockProcessed"}, peerID: PeerIDs, height: Heights] \cup
[type: {"pcBlockVerificationFailure"}, height: Heights, firstPeerID: PeerIDs, secondPeerID: PeerIDs] \cup
[type: {"bcAddNewPeer"}, peerID: PeerIDs] \cup
[type: {"bcRemovePeer"}, peerID: PeerIDs]
\* Output events produced by the scheduler.
\* Note: in v2 the status request is done by the reactor/ environment
OutEvents ==
{noEvent} \cup
[type: {"scPeerError"}, peerID: PeerIDs, error: Errors] \cup
[type: {"scSchedulerFail"}, error: Errors] \cup
[type: {"scBlockRequest"}, peerID: PeerIDs, height: Heights] \cup
[type: {"scBlockReceived"}, peerID: PeerIDs, block: Blocks] \cup
[type: {"scPeersPruned"}, pruned: SUBSET [peerID: PeerIDs]] \cup
[type: {"scFinishedEv"}, error: Errors]
(* ----------------------------------------------------------------------------------------------*)
(* The behavior of the scheduler that keeps track of peers, block requests and responses, etc. *)
(* See scheduler.go *)
(* https://github.com/tendermint/tendermint/blob/v0.33.3/blockchain/v2/scheduler.go *)
(* ----------------------------------------------------------------------------------------------*)
addPeer(sc, peerID) ==
IF peerID \in sc.peers THEN
[err |-> "errAddDuplicatePeer", val |-> sc]
ELSE IF sc.peerStates[peerID] = "peerStateRemoved" THEN
[err |-> "errAddRemovedPeer", val |-> sc]
ELSE
LET newPeers == sc.peers \cup { peerID } IN
LET newPeerHeights == [sc.peerHeights EXCEPT ![peerID] = None] IN
LET newPeerStates == [sc.peerStates EXCEPT ![peerID] = "peerStateNew"] IN
LET newSc == [sc EXCEPT
!.peers = newPeers,
!.peerHeights = newPeerHeights,
!.peerStates = newPeerStates] IN
[err |-> noErr, val |-> newSc]
maxHeight(states, heights) ==
LET activePeers == {p \in DOMAIN states: states[p] = "peerStateReady"} IN
IF activePeers = {} THEN
0 \* no peers, just return 0
ELSE
CHOOSE max \in { heights[p] : p \in activePeers }:
\A p \in activePeers: heights[p] <= max \* max is the maximum
maxHeightScheduler(sc) ==
maxHeight(sc.peerStates, sc.peerHeights)
removePeer(sc, peerID) ==
IF peerID \notin sc.peers THEN
[err |-> "errPeerNotFound", val |-> sc]
ELSE IF sc.peerStates[peerID] = "peerStateRemoved" THEN
[err |-> "errDelRemovedPeer", val |-> sc]
ELSE
LET newSc == [sc EXCEPT
!.peerHeights[peerID] = None,
!.peerStates[peerID] = "peerStateRemoved",
\* remove all blocks from peerID and block requests to peerID, see scheduler.removePeer
!.blockStates = [h \in Heights |->
IF sc.pendingBlocks[h] = peerID \/ sc.receivedBlocks[h] = peerID THEN "blockStateNew"
ELSE sc.blockStates[h]],
!.pendingBlocks = [h \in Heights |-> IF sc.pendingBlocks[h] = peerID THEN None ELSE sc.pendingBlocks[h]],
!.receivedBlocks = [h \in Heights |-> IF sc.receivedBlocks[h] = peerID THEN None ELSE sc.receivedBlocks[h]]
] IN
[err |-> noErr, val |-> newSc]
addNewBlocks(sc, newMph) ==
\* add new blocks to be requested (e.g. when overall max peer height has changed)
IF Cardinality(sc.blocks) >= numRequests THEN
[newBlocks |-> sc.blocks, newBlockStates |-> sc.blockStates]
ELSE
LET requestSet == sc.height.. Min(numRequests+sc.height, newMph) IN
LET heightsToRequest == {h \in requestSet: sc.blockStates[h] = "blockStateUnknown"} IN
LET newBlockStates == [
h \in Heights |-> IF h \in heightsToRequest THEN "blockStateNew"
ELSE sc.blockStates[h]] IN
[newBlocks |-> heightsToRequest, newBlockStates |-> newBlockStates]
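\* A small worked example (illustrative, not part of the original spec):
\* with numRequests = 2, sc.height = 2, sc.blocks = {} and newMph = 3,
\* the request window is 2..Min(2 + 2, 3) = 2..3; if both heights are still
\* "blockStateUnknown", they become newBlocks and are marked "blockStateNew"
\* in newBlockStates.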
\* Update the peer height (the peer should have been previously added)
setPeerHeight(sc, peerID, height) ==
IF peerID \notin sc.peers THEN
[err |-> "errPeerNotFound", val |-> sc]
ELSE IF sc.peerStates[peerID] = "peerStateRemoved" THEN
[err |-> "errUpdateRemovedPeer", val |-> sc]
ELSE IF height < sc.peerHeights[peerID] THEN (* The peer is corrupt? Remove the peer. *)
removePeer(sc, peerID)
ELSE
LET newPeerHeights == [sc.peerHeights EXCEPT ![peerID] = height] IN \* set the peer's height
LET newPeerStates == [sc.peerStates EXCEPT ![peerID] = "peerStateReady"] IN \* set the peer's state
LET newMph == maxHeight(newPeerStates, newPeerHeights) IN
LET res == addNewBlocks(sc, newMph) IN
LET newSc == [sc EXCEPT
!.peerHeights = newPeerHeights, !.peerStates = newPeerStates,
!.blocks = res.newBlocks, !.blockStates = res.newBlockStates] IN
[err |-> noErr, val |-> newSc]
nextHeightToSchedule(sc) ==
LET toBeScheduled == {h \in DOMAIN sc.blockStates: sc.blockStates[h] = "blockStateNew"} \cup {ultimateHeight+1} IN
CHOOSE minH \in toBeScheduled: \A h \in toBeScheduled: h >= minH
getStateAtHeight(sc, h) ==
IF h < sc.height THEN
"blockStateProcessed"
ELSE IF h \in DOMAIN sc.blockStates THEN
sc.blockStates[h]
ELSE
"blockStateUnknown"
markPending(sc, peerID, h) ==
IF getStateAtHeight(sc, h) /= "blockStateNew" THEN
[err |-> "errBadBlockState", val |-> sc]
ELSE IF peerID \notin sc.peers \/ sc.peerStates[peerID] /= "peerStateReady" THEN
[err |-> "errBadPeerState", val |-> sc]
ELSE IF h > sc.peerHeights[peerID] THEN
[err |-> "errPeerTooShort", val |-> sc]
ELSE
LET newSc == [sc EXCEPT
!.blockStates = [sc.blockStates EXCEPT ![h] = "blockStatePending"],
!.pendingBlocks = [sc.pendingBlocks EXCEPT ![h] = peerID]] IN
[err |-> noErr, val |-> newSc]
markReceived(sc, peerID, h) ==
IF peerID \notin sc.peers \/ sc.peerStates[peerID] /= "peerStateReady" THEN
[err |-> "errBadPeerState", val |-> sc]
ELSE IF getStateAtHeight(sc, h) /= "blockStatePending" \/ sc.pendingBlocks[h] /= peerID THEN
[err |-> "errBadPeer", val |-> sc]
ELSE
LET newSc == [sc EXCEPT
!.blockStates = [sc.blockStates EXCEPT ![h] = "blockStateReceived"],
!.pendingBlocks = [sc.pendingBlocks EXCEPT ![h] = None],
!.receivedBlocks = [sc.receivedBlocks EXCEPT ![h] = peerID]] IN
[err |-> noErr, val |-> newSc]
markProcessed(sc, h) ==
IF getStateAtHeight(sc, h) /= "blockStateReceived" THEN
[err |-> "errProcessedBlockEv", val |-> sc]
ELSE
LET newSc == [sc EXCEPT
!.blockStates = [sc.blockStates EXCEPT ![h] = "blockStateProcessed"],
!.receivedBlocks = [sc.receivedBlocks EXCEPT ![h] = None],
!.height = sc.height + 1] IN
[err |-> noErr, val |-> newSc]
reachedMaxHeight(sc) ==
IF sc.peers = {} THEN
FALSE
ELSE
LET maxH == maxHeightScheduler(sc) IN
maxH > 0 /\ (sc.height >= maxH)
highPeers(sc, minH) == {p \in sc.peers: sc.peerHeights[p] >= minH}
(* ----------------------------------------------------------------------------------------------*)
(* The behavior of the scheduler state machine *)
(* See scheduler.go *)
(* https://github.com/tendermint/tendermint/tree/brapse/blockchain-v2-riri-reactor-2/scheduler.go*)
(* ----------------------------------------------------------------------------------------------*)
blStateInit(h, start) ==
IF h <= start THEN
"blockStateProcessed"
ELSE "blockStateUnknown"
InitSc ==
/\ scRunning = TRUE
/\ outEvent = noEvent
/\ \E startHeight \in Heights:
scheduler = [
initHeight |-> startHeight,
height |-> startHeight + 1,
peers |-> {},
peerHeights |-> [p \in PeerIDs |-> None],
peerStates |-> [p \in PeerIDs |-> "peerStateUnknown"],
blocks |-> {},
blockStates |-> [h \in Heights |-> blStateInit(h, startHeight)],
pendingBlocks |-> [h \in Heights |-> None],
receivedBlocks |-> [h \in Heights |-> None]
]
handleAddNewPeer ==
/\ inEvent.type = "bcAddNewPeer"
/\ LET res == addPeer(scheduler, inEvent.peerID) IN
IF res.err /= noErr THEN
/\ outEvent' = [type |-> "scSchedulerFail", error |-> res.err]
/\ UNCHANGED <<scheduler>>
ELSE
/\ scheduler' = res.val
/\ UNCHANGED outEvent
/\ UNCHANGED scRunning
finishSc(event) ==
event.type = "scFinishedEv" /\ event.error = "finished"
handleRemovePeer ==
/\ inEvent.type = "bcRemovePeer"
/\ LET res == removePeer(scheduler, inEvent.peerID) IN
IF res.err /= noErr THEN
/\ outEvent' = [type |-> "scSchedulerFail", error |-> res.err]
/\ UNCHANGED scheduler
ELSE
/\ scheduler' = res.val
/\ IF reachedMaxHeight(scheduler') THEN
outEvent' = [type |-> "scFinishedEv", error |-> "errAfterPeerRemove"]
ELSE
UNCHANGED outEvent
/\ IF finishSc(outEvent') THEN
scRunning' = FALSE
ELSE UNCHANGED scRunning
handleStatusResponse ==
/\ inEvent.type = "bcStatusResponse"
/\ LET res == setPeerHeight(scheduler, inEvent.peerID, inEvent.height) IN
IF res.err /= noErr THEN
/\ outEvent' = [type |-> "scPeerError", peerID |-> inEvent.peerID, error |-> res.err]
/\ UNCHANGED scheduler
ELSE
/\ scheduler' = res.val
/\ UNCHANGED outEvent
/\ UNCHANGED scRunning
handleTrySchedule == \* every 10 ms, but our spec is asynchronous
/\ inEvent.type = "rTrySchedule"
/\ LET minH == nextHeightToSchedule(scheduler) IN
IF minH = ultimateHeight+1 THEN
/\ outEvent' = noEvent
/\ UNCHANGED scheduler
ELSE
/\ LET hp == highPeers(scheduler, minH) IN
IF hp = {} THEN
/\ outEvent' = noEvent
/\ UNCHANGED scheduler
ELSE \E bestPeerID \in hp:
/\ LET res == markPending(scheduler, bestPeerID, minH) IN
/\ IF res.err /= noErr THEN
outEvent' = [type |-> "scSchedulerFail", error|-> res.err]
ELSE
outEvent' = [type |-> "scBlockRequest", peerID |-> bestPeerID, height |-> minH]
/\ scheduler' = res.val
/\ UNCHANGED scRunning
handleBlockResponse ==
/\ inEvent.type = "bcBlockResponse"
/\ LET res == markReceived(scheduler, inEvent.peerID, inEvent.height) IN
IF res.err /= noErr THEN
LET res1 == removePeer(scheduler, inEvent.peerID) IN
/\ outEvent' = [type |-> "scPeerError", peerID |-> inEvent.peerID, error |-> res.err]
/\ scheduler' = res1.val
ELSE
/\ outEvent' = [type |-> "scBlockReceived", peerID |-> inEvent.peerID, block |-> inEvent.block]
/\ scheduler' = res.val
/\ UNCHANGED scRunning
handleNoBlockResponse ==
/\ inEvent.type = "bcNoBlockResponse"
/\ IF (scheduler.peers = {} \/ scheduler.peerStates[inEvent.peerID] = "peerStateRemoved") THEN
/\ outEvent' = noEvent
/\ UNCHANGED scheduler
ELSE
LET res == removePeer(scheduler, inEvent.peerID) IN
/\ outEvent' = [type |-> "scPeerError", peerID |-> inEvent.peerID, error |-> "errPeerNoBlock"]
/\ scheduler' = res.val
/\ UNCHANGED scRunning
handleBlockProcessed ==
/\ inEvent.type = "pcBlockProcessed"
/\ IF inEvent.height /= scheduler.height THEN
/\ outEvent' = [type |-> "scSchedulerFail", error |-> "errProcessedBlockEv"]
/\ UNCHANGED scheduler
ELSE
LET res == markProcessed(scheduler, inEvent.height) IN
IF res.err /= noErr THEN
/\ outEvent' = [type |-> "scSchedulerFail", error |-> res.err]
/\ UNCHANGED scheduler
ELSE
/\ scheduler' = res.val
/\ IF reachedMaxHeight(scheduler') THEN
outEvent' = [type |-> "scFinishedEv", error |-> "finished"]
ELSE
outEvent' = noEvent
/\ IF finishSc(outEvent') THEN
scRunning' = FALSE
ELSE UNCHANGED scRunning
handleBlockProcessError ==
/\ inEvent.type = "pcBlockVerificationFailure"
/\ IF scheduler.peers = {} THEN
/\ outEvent' = noEvent
/\ UNCHANGED scheduler
ELSE
LET res1 == removePeer(scheduler, inEvent.firstPeerID) IN
LET res2 == removePeer(res1.val, inEvent.secondPeerID) IN
/\ IF reachedMaxHeight(res2.val) THEN
outEvent' = [type |-> "scFinishedEv", error |-> "finished"]
ELSE
outEvent' = noEvent
/\ scheduler' = res2.val
/\ IF finishSc(outEvent') THEN
scRunning' = FALSE
ELSE UNCHANGED scRunning
handleNoAdvanceExp ==
/\ inEvent.type = "tNoAdvanceExp"
/\ outEvent' = [type |-> "scFinishedEv", error |-> "timeout"]
/\ scRunning' = FALSE
/\ UNCHANGED <<scheduler>>
NextSc ==
IF ~scRunning THEN
UNCHANGED <<outEvent, scRunning, scheduler>>
ELSE
\/ handleStatusResponse
\/ handleAddNewPeer
\/ handleRemovePeer
\/ handleTrySchedule
\/ handleBlockResponse
\/ handleNoBlockResponse
\/ handleBlockProcessed
\/ handleBlockProcessError
\/ handleNoAdvanceExp
(* ----------------------------------------------------------------------------------------------*)
(* The behavior of the environment. *)
(* ----------------------------------------------------------------------------------------------*)
InitEnv ==
/\ inEvent = noEvent
/\ envRunning = TRUE
OnGlobalTimeoutTicker ==
/\ inEvent' = [type |-> "tNoAdvanceExp"]
/\ envRunning' = FALSE
OnTrySchedule ==
/\ inEvent' = [type |-> "rTrySchedule"]
/\ UNCHANGED envRunning
OnAddPeerEv ==
/\ inEvent' \in [type: {"bcAddNewPeer"}, peerID: PeerIDs]
/\ UNCHANGED envRunning
OnStatusResponseEv ==
\* any status response can come from the blockchain, pick one non-deterministically
/\ inEvent' \in [type: {"bcStatusResponse"}, peerID: PeerIDs, height: Heights]
/\ UNCHANGED envRunning
OnBlockResponseEv ==
\* any block response can come from the blockchain, pick one non-deterministically
/\ inEvent' \in [type: {"bcBlockResponse"}, peerID: PeerIDs, height: Heights, block: Blocks]
/\ UNCHANGED envRunning
OnNoBlockResponseEv ==
\* any no block response can come from the blockchain, pick one non-deterministically
/\ inEvent' \in [type: {"bcNoBlockResponse"}, peerID: PeerIDs, height: Heights]
/\ UNCHANGED envRunning
OnRemovePeerEv ==
\* although bcRemovePeer admits an arbitrary set, we produce just a singleton
/\ inEvent' \in [type: {"bcRemovePeer"}, peerID: PeerIDs]
/\ UNCHANGED envRunning
OnPcBlockProcessed ==
/\ inEvent' \in [type: {"pcBlockProcessed"}, peerID: PeerIDs, height: Heights]
/\ UNCHANGED envRunning
OnPcBlockVerificationFailure ==
/\ inEvent' \in [type: {"pcBlockVerificationFailure"}, firstPeerID: PeerIDs, secondPeerID: PeerIDs, height: Heights]
/\ UNCHANGED envRunning
\* messages from scheduler
OnScFinishedEv ==
/\ outEvent.type = "scFinishedEv"
/\ envRunning' = FALSE \* stop the env
/\ UNCHANGED inEvent
NextEnv ==
IF ~envRunning THEN
UNCHANGED <<inEvent, envRunning>>
ELSE
\/ OnScFinishedEv
\/ OnGlobalTimeoutTicker
\/ OnAddPeerEv
\/ OnTrySchedule
\/ OnStatusResponseEv
\/ OnBlockResponseEv
\/ OnNoBlockResponseEv
\/ OnRemovePeerEv
\/ OnPcBlockProcessed
\/ OnPcBlockVerificationFailure
(* ----------------------------------------------------------------------------------------------*)
(* The system is the composition of the environment and the scheduler                             *)
(* ----------------------------------------------------------------------------------------------*)
Init == turn = "environment" /\ InitEnv /\ InitSc
FlipTurn ==
turn' = (
IF turn = "scheduler" THEN
"environment"
ELSE
"scheduler"
)
\* scheduler and environment alternate their steps (synchronous composition introduces more states)
Next ==
/\ FlipTurn
/\ IF turn = "scheduler" THEN
/\ NextSc
/\ inEvent' = noEvent
/\ UNCHANGED envRunning
ELSE
/\ NextEnv
/\ outEvent' = noEvent
/\ UNCHANGED <<scRunning, scheduler>>
Spec == Init /\ [][Next]_vars /\ WF_turn(FlipTurn)
(* ----------------------------------------------------------------------------------------------*)
(* Invariants *)
(* ----------------------------------------------------------------------------------------------*)
TypeOK ==
/\ turn \in {"scheduler", "environment"}
/\ inEvent \in InEvents
/\ envRunning \in BOOLEAN
/\ outEvent \in OutEvents
/\ scheduler \in [
initHeight: Heights,
height: Heights \cup {ultimateHeight + 1},
peers: SUBSET PeerIDs,
peerHeights: [PeerIDs -> Heights \cup {None}],
peerStates: [PeerIDs -> PeerStates],
blocks: SUBSET Heights,
blockStates: [Heights -> BlockStates],
pendingBlocks: [Heights -> PeerIDs \cup {None}],
receivedBlocks: [Heights -> PeerIDs \cup {None}]
]
(* ----------------------------------------------------------------------------------------------*)
(* Helpers for Properties *)
(* ----------------------------------------------------------------------------------------------*)
NoFailuresAndTimeouts ==
/\ inEvent.type /= "bcRemovePeer"
/\ inEvent.type /= "bcNoBlockResponse"
/\ inEvent.type /= "pcBlockVerificationFailure"
\* simulate good peer behavior using this formula. Useful to show termination in the presence of good peers.
GoodResponse ==
\/ inEvent.type
\in {"bcAddNewPeer", "bcStatusResponse", "bcBlockResponse", "pcBlockProcessed", "rTrySchedule"}
\/ ~envRunning
\* all blocks from initHeight up to max peer height have been processed
AllRequiredBlocksProcessed ==
LET maxH == Max(scheduler.height, maxHeightScheduler(scheduler)) IN
LET processedBlocks == {h \in scheduler.initHeight.. maxH-1: scheduler.blockStates[h] = "blockStateProcessed"} IN
scheduler.height >= maxH /\ Cardinality(processedBlocks) = scheduler.height - scheduler.initHeight
IncreaseHeight ==
(scheduler'.height > scheduler.height) \/ (scheduler.height >= ultimateHeight)
(* ----------------------------------------------------------------------------------------------*)
(* Expected properties *)
(* ----------------------------------------------------------------------------------------------*)
(* *)
(* 1. TerminationWhenNoAdvance - termination if there are no correct peers. *)
(* The model assumes the "noAdvance" timer expires and the "tNoAdvanceExp" event is received *)
(* *)
TerminationWhenNoAdvance ==
(inEvent.type = "tNoAdvanceExp")
=> <>(outEvent.type = "scFinishedEv" /\ outEvent.error = "timeout")
(* ----------------------------------------------------------------------------------------------*)
(* *)
(* 2. TerminationGoodPeers - *)
(* termination when IncreaseHeight holds true, fastsync is progressing, all blocks processed *)
(* *)
TerminationGoodPeers ==
(/\ scheduler.height < ultimateHeight
/\ <>[]GoodResponse
/\[]<>(<<IncreaseHeight>>_<<scheduler, turn>>)
)
=> <>(outEvent.type = "scFinishedEv" /\ AllRequiredBlocksProcessed)
(* This property is violated. It shows that the precondition of TerminationGoodPeers is not *)
(* always FALSE *)
TerminationGoodPeersPre ==
(/\ scheduler.height < ultimateHeight
/\ <>[]GoodResponse
/\[]<>(<<IncreaseHeight>>_<<scheduler, turn>>)
)
=> FALSE
(* ----------------------------------------------------------------------------------------------*)
(* 3. TerminationAllCases - *)
(* any peer behavior, either terminates with all blocks processed or times out *)
TerminationAllCases ==
(/\ scheduler.height < ultimateHeight
/\(([]<> (<<IncreaseHeight>>_<<scheduler, turn>>)) \/ <>(inEvent.type = "tNoAdvanceExp"))
)
=> <>(outEvent.type = "scFinishedEv" /\ (AllRequiredBlocksProcessed \/ outEvent.error = "timeout"))
(* This property is violated. It shows that the precondition of TerminationAllCases is not *)
(* always FALSE *)
TerminationAllCasesPre ==
(/\ scheduler.height < ultimateHeight
/\(([]<> (<<IncreaseHeight>>_<<scheduler, turn>>)) \/ <>(inEvent.type = "tNoAdvanceExp"))
)
=> FALSE
(* This property is violated. TLC output shows an example of increasing heights in the scheduler *)
SchedulerIncreasePre ==
[]<>(<<IncreaseHeight>>_<<scheduler, turn>>)
=> FALSE
=============================================================================
\* Modification History
\* Last modified Thu Apr 16 13:21:33 CEST 2020 by ancaz
\* Created Sat Feb 08 13:12:30 CET 2020 by ancaz

+ 0
- 13
rust-spec/fastsync/verification/001bmc-apalache.csv View File

@ -1,13 +0,0 @@
no,filename,tool,timeout,init,inv,next,args
1,MC_1_1_4.tla,apalache,1h,,Sync1AsInv,,--length=20
2,MC_1_1_4.tla,apalache,1h,,Sync2AsInv,,--length=20
3,MC_1_1_4.tla,apalache,4h,,Sync3AsInv,,--length=20
4,MC_1_1_4.tla,apalache,8h,,CorrectBlocksInv,,--length=16
5,MC_1_1_4.tla,apalache,1h,,SyncFromCorrectInv,,--length=20
6,MC_1_1_4.tla,apalache,1h,,CorrectNeverSuspectedInv,,--length=20
7,MC_1_2_4.tla,apalache,1h,,Sync1AsInv,,--length=20
8,MC_1_2_4.tla,apalache,1h,,Sync2AsInv,,--length=20
9,MC_1_2_4.tla,apalache,4h,,Sync3AsInv,,--length=20
10,MC_1_2_4.tla,apalache,8h,,CorrectBlocksInv,,--length=16
11,MC_1_2_4.tla,apalache,1h,,SyncFromCorrectInv,,--length=20
12,MC_1_2_4.tla,apalache,1h,,CorrectNeverSuspectedInv,,--length=20

+ 0
- 9
rust-spec/fastsync/verification/002tlc-tlc.csv View File

@ -1,9 +0,0 @@
no,filename,tool,timeout,init,inv,next,args
1,MC_1_1_4.tla,tlc,1h,,,,-workers 4 -config MC-Sync1AsInv.cfg
2,MC_1_1_4.tla,tlc,1h,,,,-workers 4 -config MC-Sync2AsInv.cfg
3,MC_1_1_4.tla,tlc,24h,,,,-workers 4 -config MC-Sync3AsInv.cfg
4,MC_1_1_4.tla,tlc,2h,,Termination,,-workers 4 -config MC-Termination.cfg
5,MC_1_1_4.tla,tlc,24h,,TerminationByTO,,-workers 4 -config MC-TerminationByTO.cfg
6,MC_1_1_4.tla,tlc,24h,,,,-workers 4 -config MC-CorrectBlocksInv.cfg
7,MC_1_1_4.tla,tlc,1h,,,,-workers 4 -config MC-SyncFromCorrectInv.cfg
8,MC_1_1_4.tla,tlc,1h,,,,-workers 4 -config MC-CorrectNeverSuspectedInv.cfg

+ 0
- 10
rust-spec/fastsync/verification/MC-CorrectBlocksInv.cfg View File

@ -1,10 +0,0 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* INVARIANT definition
INVARIANT
CorrectBlocksInv
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 0
- 10
rust-spec/fastsync/verification/MC-CorrectNeverSuspectedInv.cfg View File

@ -1,10 +0,0 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* INVARIANT definition
INVARIANT
CorrectNeverSuspectedInv
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 0
- 10
rust-spec/fastsync/verification/MC-Sync1AsInv.cfg View File

@ -1,10 +0,0 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* INVARIANT definition
INVARIANT
Sync1AsInv
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 0
- 10
rust-spec/fastsync/verification/MC-Sync2AsInv.cfg View File

@ -1,10 +0,0 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* INVARIANT definition
INVARIANT
Sync2AsInv
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 0
- 10
rust-spec/fastsync/verification/MC-Sync3AsInv.cfg View File

@ -1,10 +0,0 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* INVARIANT definition
INVARIANT
Sync3AsInv
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 0
- 10
rust-spec/fastsync/verification/MC-SyncFromCorrectInv.cfg View File

@ -1,10 +0,0 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* INVARIANT definition
INVARIANT
SyncFromCorrectInv
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 0
- 10
rust-spec/fastsync/verification/MC-Termination.cfg View File

@ -1,10 +0,0 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* PROPERTY definition
PROPERTY
Termination
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 0
- 10
rust-spec/fastsync/verification/MC-TerminationByTO.cfg View File

@ -1,10 +0,0 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* PROPERTY definition
PROPERTY
TerminationByTO
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 0
- 10
rust-spec/fastsync/verification/MC-TerminationCorr.cfg View File

@ -1,10 +0,0 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* PROPERTY definition
PROPERTY
TerminationCorr
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 0
- 17
rust-spec/fastsync/verification/MC_1_0_4.tla View File

@ -1,17 +0,0 @@
--------------------------- MODULE MC_1_0_4 ------------------------------------
\*a <: b == a \* type annotation
VALIDATOR_SETS == {"vs1", "vs2"}
NIL_VS == "NilVS"
CORRECT == {"c1"}
FAULTY == {"f2"} \ {"f2"}
MAX_HEIGHT == 4
PEER_MAX_REQUESTS == 2
TARGET_PENDING == 3
VARIABLES
state, blockPool, peersState, chain, turn, inMsg, outMsg
INSTANCE fastsync_apalache
================================================================================

+ 0
- 15
rust-spec/fastsync/verification/MC_1_1_4.tla View File

@ -1,15 +0,0 @@
--------------------------- MODULE MC_1_1_4 ------------------------------------
VALIDATOR_SETS == {"vs1", "vs2"}
NIL_VS == "NilVS"
CORRECT == {"c1"}
FAULTY == {"f2"}
MAX_HEIGHT == 4
PEER_MAX_REQUESTS == 2
TARGET_PENDING == 3
VARIABLES
state, blockPool, peersState, chain, turn, inMsg, outMsg
INSTANCE fastsync_apalache
================================================================================

+ 0
- 15
rust-spec/fastsync/verification/MC_1_2_4.tla View File

@ -1,15 +0,0 @@
-------------------------- MODULE MC_1_2_4 ------------------------------------
VALIDATOR_SETS == {"vs1", "vs2"}
NIL_VS == "NilVS"
CORRECT == {"c1"}
FAULTY == {"f2", "f3"}
MAX_HEIGHT == 4
PEER_MAX_REQUESTS == 2
TARGET_PENDING == 3
VARIABLES
state, blockPool, peersState, chain, turn, inMsg, outMsg
INSTANCE fastsync_apalache
===============================================================================

+ 0
- 15
rust-spec/fastsync/verification/MC_1_2_5.tla View File

@ -1,15 +0,0 @@
-------------------------- MODULE MC_1_2_5 ------------------------------------
VALIDATOR_SETS == {"vs1", "vs2"}
NIL_VS == "NilVS"
CORRECT == {"c1"}
FAULTY == {"f2", "f3"}
MAX_HEIGHT == 5
PEER_MAX_REQUESTS == 2
TARGET_PENDING == 3
VARIABLES
state, blockPool, peersState, chain, turn, inMsg, outMsg
INSTANCE fastsync_apalache
===============================================================================

+ 0
- 15
rust-spec/fastsync/verification/MC_1_3_5.tla View File

@ -1,15 +0,0 @@
------------------------------- MODULE MC_1_3_5 ------------------------------------
MAX_HEIGHT == 5
VALIDATOR_SETS == {"vs1", "vs2"}
NIL_VS == "NilVS"
CORRECT == {"c1"}
FAULTY == {"f2", "f3", "f4"}
PEER_MAX_REQUESTS == 2
TARGET_PENDING == 3
VARIABLES
state, blockPool, peersState, chain, turn, inMsg, outMsg
INSTANCE fastsync_apalache
========================================================================================

+ 0
- 17
rust-spec/fastsync/verification/MC_2_0_4.tla View File

@ -1,17 +0,0 @@
--------------------------- MODULE MC_2_0_4 ------------------------------------
a <: b == a \* type annotation
VALIDATOR_SETS == {"vs1", "vs2"}
NIL_VS == "NilVS"
CORRECT == {"c1", "c2"}
FAULTY == {} <: {STRING}
MAX_HEIGHT == 4
PEER_MAX_REQUESTS == 2
TARGET_PENDING == 3
VARIABLES
state, blockPool, peersState, chain, turn, inMsg, outMsg
INSTANCE fastsync_apalache
================================================================================

+ 0
- 110
rust-spec/fastsync/verification/Tinychain.tla View File

@ -1,110 +0,0 @@
-------------------------- MODULE Tinychain ----------------------------------
(* A very abstract model of Tendermint blockchain. Its only purpose is to highlight
the relation between validator sets, next validator sets, and last commits.
*)
EXTENDS Integers
\* type annotation
a <: b == a
\* the type of validator sets, e.g., STRING
VST == STRING
\* LastCommit type.
\* It contains:
\* 1) the flag of whether the block id equals the hash of the previous block, and
\* 2) the set of the validators who have committed the block.
\* In the implementation, blockId is the hash of the previous block, which cannot be forged.
\* We abstract block id into whether it equals the hash of the previous block or not.
LCT == [blockIdEqRef |-> BOOLEAN, committers |-> VST]
\* Block type.
\* A block contains its height, validator set, next validator set, and last commit.
\* Moreover, it contains the flag that tells us whether the block equals the one
\* on the reference chain (this is an abstraction of hash).
BT == [height |-> Int, hashEqRef |-> BOOLEAN, wellFormed |-> BOOLEAN,
VS |-> VST, NextVS |-> VST, lastCommit |-> LCT]
CONSTANTS
(*
A set of abstract values, each value representing a set of validators.
For the purposes of this specification, they can be any values,
e.g., "s1", "s2", etc.
*)
VALIDATOR_SETS,
(* a nil validator set that is outside of VALIDATOR_SETS *)
NIL_VS,
(* The maximal height, up to which the blockchain may grow *)
MAX_HEIGHT
Heights == 1..MAX_HEIGHT
\* the set of all possible commits
Commits == [blockIdEqRef: BOOLEAN, committers: VALIDATOR_SETS]
\* the set of all possible blocks, not necessarily valid ones
Blocks ==
[height: Heights, hashEqRef: BOOLEAN, wellFormed: BOOLEAN,
VS: VALIDATOR_SETS, NextVS: VALIDATOR_SETS, lastCommit: Commits]
\* Does the chain contain a sound sequence of blocks that could be produced by
\* a 2/3 of faulty validators. This operator can be used to initialise the chain!
IsCorrectChain(chain) ==
\* restrict the structure of the blocks, to decrease the TLC search space
LET OkCommits == [blockIdEqRef: {TRUE}, committers: VALIDATOR_SETS]
OkBlocks == [height: Heights, hashEqRef: {TRUE}, wellFormed: {TRUE},
VS: VALIDATOR_SETS, NextVS: VALIDATOR_SETS, lastCommit: OkCommits]
IN
/\ chain \in [1..MAX_HEIGHT -> OkBlocks]
/\ \A h \in 1..MAX_HEIGHT:
LET b == chain[h] IN
/\ b.height = h \* the height is correct
/\ h > 1 =>
LET p == chain[h - 1] IN
/\ b.VS = p.NextVS \* the validators propagate from the previous block
/\ b.lastCommit.committers = p.VS \* and they are the committers
\* The basic properties of blocks on the blockchain:
\* They should pass the validity check and they may verify the next block.
\* Does the block pass the consistency check against the next validators of the previous block
IsMatchingValidators(block, nextVS) ==
\* simply check that the validator set is propagated correctly.
\* (the implementation tests hashes and the application state)
block.VS = nextVS
\* Does the block verify the commit (of the next block)
PossibleCommit(block, commit) ==
\* the commits are signed by the block validators
/\ commit.committers = block.VS
\* The block id in the commit matches the block hash (abstract comparison).
\* (The implementation has extensive tests for that.)
\* this is an abstraction of: commit.blockId = hash(block)
\*
\* These are possible scenarios on the concrete hashes:
\*
\* scenario 1: commit.blockId = 10 /\ hash(block) = 10 /\ hash(ref) = 10
\* scenario 2: commit.blockId = 20 /\ hash(block) = 20 /\ block.VS /= ref.VS
\* scenario 3: commit.blockId = 50 /\ hash(block) = 100
\* scenario 4: commit.blockId = 10 /\ hash(block) = 100
\* scenario 5: commit.blockId = 100 /\ hash(block) = 10
/\ commit.blockIdEqRef = block.hashEqRef
\* the following test would be cheating, as we do not have access to the
\* reference chain:
\* /\ commit.blockIdEqRef
\* Basic invariants
\* every block has the validator set that is chosen by its predecessor
ValidBlockInv(chain) ==
\A h \in 2..MAX_HEIGHT:
IsMatchingValidators(chain[h], chain[h - 1].NextVS)
\* last commit of every block is signed by the validators of the predecessor
VerifiedBlockInv(chain) ==
\A h \in 2..MAX_HEIGHT:
PossibleCommit(chain[h - 1], chain[h].lastCommit)
==================================================================================

+ 0
- 822
rust-spec/fastsync/verification/fastsync_apalache.tla View File

@ -1,822 +0,0 @@
----------------------------- MODULE fastsync_apalache -----------------------------
(*
In this document we give the high level specification of the fast sync
protocol as implemented here:
https://github.com/tendermint/tendermint/tree/master/blockchain/v2.
We assume a system in which one node is trying to sync with the blockchain
(replicated state machine) by downloading blocks from the set of full nodes
(we call them peers) that are block providers, and executing transactions
(part of the block) against the application.
Peers can be faulty, and we don't make any assumption about the rate of correct/faulty
nodes in the node peerset (i.e., they can all be faulty). Correct peers are part
of the replicated state machine, i.e., they manage blockchain and execute
transactions against the same deterministic application. We don't make any
assumptions about the behavior of faulty processes. Processes (client and peers)
communicate by message passing.
In this specification, we model this system with two parties:
- the node (state machine) that is doing fastsync and
- the environment with which node interacts.
The environment consists of the set of (correct and faulty) peers with
which node interacts as part of fast sync protocol, but also contains some
aspects (adding/removing peers, timeout mechanism) that are part of the node
local environment (could be seen as part of the runtime in which node
executes).
As part of the fast sync protocol a node and the peers exchange the following messages:
- StatusRequest
- StatusResponse
- BlockRequest
- BlockResponse
- NoBlockResponse.
A node is periodically issuing StatusRequests to query peers for their current height (to decide what
blocks to ask from what peers). Based on StatusResponses (that are sent by peers), the node queries
blocks for some height(s) by sending peers BlockRequest messages. A peer provides a requested block by
BlockResponse message. If a peer does not want to provide a requested block, then it sends NoBlockResponse message.
In addition to those messages, a node in this spec receives additional input messages (events):
- AddPeer
- RemovePeer
- SyncTimeout.
These are the control messages that are provided to the node by its execution environment. AddPeer
is for the case when a connection is established with a peer; similarly RemovePeer is for the case when
a connection with the peer is terminated. Finally SyncTimeout is used to model a timeout trigger.
We assume that the fast sync protocol starts when connections with some number of peers
are established. Therefore, the peer set is initialised with a non-empty set of peer ids. Note, however,
that the node does not initially know the peer heights.
*)
EXTENDS Integers, FiniteSets, Sequences
CONSTANTS MAX_HEIGHT, \* the maximal height of blockchain
VALIDATOR_SETS, \* abstract set of validators
NIL_VS, \* a nil validator set
CORRECT, \* set of correct peers
FAULTY, \* set of faulty peers
TARGET_PENDING, \* maximum number of pending requests + downloaded blocks that are not yet processed
PEER_MAX_REQUESTS \* maximum number of pending requests per peer
ASSUME CORRECT \intersect FAULTY = {}
ASSUME TARGET_PENDING > 0
ASSUME PEER_MAX_REQUESTS > 0
\* the blockchain, see Tinychain
VARIABLE chain
\* introduce tiny chain as the source of blocks for the correct nodes
INSTANCE Tinychain
\* a special value for an undefined height
NilHeight == 0
\* the height of the genesis block
TrustedHeight == 1
\* the set of all peer ids the node can receive a message from
AllPeerIds == CORRECT \union FAULTY
\* A correct last commit has enough voting power, i.e., +2/3 of the voting power of
\* the corresponding validator set (enoughVotingPower = TRUE), and it signs blockId.
\* BlockId defines correct previous block (in the implementation it is the hash of the block).
\* Instead of blockId, we encode blockIdEqRef, which is true, if the block id is equal
\* to the hash of the previous block, see Tinychain.
CorrectLastCommit(h) == chain[h].lastCommit
NilCommit == [blockIdEqRef |-> FALSE, committers |-> NIL_VS]
\* correct node always supplies the blocks from the blockchain
CorrectBlock(h) == chain[h]
NilBlock ==
[height |-> 0, hashEqRef |-> FALSE, wellFormed |-> FALSE,
lastCommit |-> NilCommit, VS |-> NIL_VS, NextVS |-> NIL_VS]
\* a special value for an undefined peer
NilPeer == "Nil" \* STRING for apalache efficiency
\* control the state of the syncing node
States == { "running", "finished"}
NoMsg == [type |-> "None"]
\* the variables of the node running fastsync
VARIABLES
state, \* running or finished
(*
blockPool [
height, \* current height we are trying to sync. Last block executed is height - 1
peerIds, \* set of peers node is connected to
peerHeights, \* map of peer ids to its (stated) height
blockStore, \* map of heights to (received) blocks
receivedBlocks, \* map of heights to peer that has sent us the block (stored in blockStore)
pendingBlocks, \* map of heights to peer to which block request has been sent
syncHeight, \* height at the point syncTimeout was triggered last time
syncedBlocks \* number of blocks synced since last syncTimeout. If it is 0 when the next timeout occurs, then protocol terminates.
]
*)
blockPool
\* the variables of the peers providing blocks
VARIABLES
(*
peersState [
peerHeights, \* track peer heights
statusRequested, \* boolean set to true when StatusRequest is received. Models periodic sending of StatusRequests.
blocksRequested \* set of BlockRequests received that are not answered yet
]
*)
peersState
\* the variables for the network and scheduler
VARIABLES
turn, \* who is taking the turn: "Peers" or "Node"
inMsg, \* a node receives message by this variable
outMsg \* a node sends a message by this variable
(* the variables of the node *)
nvars == <<state, blockPool>>
(*************** Type definitions for Apalache (model checker) **********************)
AsIntSet(S) == S <: {Int}
\* type of process ids
PIDT == STRING
AsPidSet(S) == S <: {PIDT}
\* ControlMessage type
CMT == [type |-> STRING, peerId |-> PIDT] \* type of control messages
\* InMsg type
IMT == [type |-> STRING, peerId |-> PIDT, height |-> Int, block |-> BT]
AsInMsg(m) == m <: IMT
AsInMsgSet(S) == S <: {IMT}
\* OutMsg type
OMT == [type |-> STRING, peerId |-> PIDT, height |-> Int]
AsOutMsg(m) == m <: OMT
AsOutMsgSet(S) == S <: {OMT}
\* block pool type
BPT == [height |-> Int, peerIds |-> {PIDT}, peerHeights |-> [PIDT -> Int],
blockStore |-> [Int -> BT], receivedBlocks |-> [Int -> PIDT],
pendingBlocks |-> [Int -> PIDT], syncedBlocks |-> Int, syncHeight |-> Int]
AsBlockPool(bp) == bp <: BPT
(******************** Sets of messages ********************************)
\* Control messages
ControlMsgs ==
AsInMsgSet([type: {"addPeer"}, peerId: AllPeerIds])
\union
AsInMsgSet([type: {"removePeer"}, peerId: AllPeerIds])
\union
AsInMsgSet([type: {"syncTimeout"}])
\* All messages (and events) received by a node
InMsgs ==
AsInMsgSet({NoMsg})
\union
AsInMsgSet([type: {"blockResponse"}, peerId: AllPeerIds, block: Blocks])
\union
AsInMsgSet([type: {"noBlockResponse"}, peerId: AllPeerIds, height: Heights])
\union
AsInMsgSet([type: {"statusResponse"}, peerId: AllPeerIds, height: Heights])
\union
ControlMsgs
\* Messages sent by a node and received by peers (environment in our case)
OutMsgs ==
AsOutMsgSet({NoMsg})
\union
AsOutMsgSet([type: {"statusRequest"}]) \* StatusRequest is broadcast to the set of connected peers.
\union
AsOutMsgSet([type: {"blockRequest"}, peerId: AllPeerIds, height: Heights])
(********************************** NODE ***********************************)
InitNode ==
\E pIds \in SUBSET AllPeerIds: \* set of peers node established initial connections with
/\ pIds \subseteq CORRECT
/\ pIds /= AsPidSet({}) \* apalache better checks non-emptiness than subtracts from SUBSET
/\ blockPool = AsBlockPool([
height |-> TrustedHeight + 1, \* the genesis block is at height 1
syncHeight |-> TrustedHeight + 1, \* and we are synchronized to it
peerIds |-> pIds,
peerHeights |-> [p \in AllPeerIds |-> NilHeight], \* no peer height is known
blockStore |->
[h \in Heights |->
IF h > TrustedHeight THEN NilBlock ELSE chain[1]],
receivedBlocks |-> [h \in Heights |-> NilPeer],
pendingBlocks |-> [h \in Heights |-> NilPeer],
syncedBlocks |-> -1
])
/\ state = "running"
\* Remove faulty peers.
\* Returns new block pool.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L222
RemovePeers(rmPeers, bPool) ==
LET keepPeers == bPool.peerIds \ rmPeers IN
LET pHeights ==
[p \in AllPeerIds |-> IF p \in rmPeers THEN NilHeight ELSE bPool.peerHeights[p]] IN
LET failedRequests ==
{h \in Heights: /\ h >= bPool.height
/\ \/ bPool.pendingBlocks[h] \in rmPeers
\/ bPool.receivedBlocks[h] \in rmPeers} IN
LET pBlocks ==
[h \in Heights |-> IF h \in failedRequests THEN NilPeer ELSE bPool.pendingBlocks[h]] IN
LET rBlocks ==
[h \in Heights |-> IF h \in failedRequests THEN NilPeer ELSE bPool.receivedBlocks[h]] IN
LET bStore ==
[h \in Heights |-> IF h \in failedRequests THEN NilBlock ELSE bPool.blockStore[h]] IN
IF keepPeers /= bPool.peerIds
THEN [bPool EXCEPT
!.peerIds = keepPeers,
!.peerHeights = pHeights,
!.pendingBlocks = pBlocks,
!.receivedBlocks = rBlocks,
!.blockStore = bStore
]
ELSE bPool
\* Add a peer.
\* see https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L198
AddPeer(peer, bPool) ==
[bPool EXCEPT !.peerIds = bPool.peerIds \union {peer}]
(*
Handle StatusResponse message.
If valid status response, update peerHeights.
If invalid (height is smaller than the current), then remove peer.
Returns new block pool.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L667
*)
HandleStatusResponse(msg, bPool) ==
LET peerHeight == bPool.peerHeights[msg.peerId] IN
IF /\ msg.peerId \in bPool.peerIds
/\ msg.height >= peerHeight
THEN \* a correct response
LET pHeights == [bPool.peerHeights EXCEPT ![msg.peerId] = msg.height] IN
[bPool EXCEPT !.peerHeights = pHeights]
ELSE RemovePeers({msg.peerId}, bPool) \* the peer has sent us message with smaller height or peer is not in our peer list
(*
Handle BlockResponse message.
If valid block response, update blockStore, pendingBlocks and receivedBlocks.
If invalid (unsolicited response or malformed block), then remove peer.
Returns new block pool.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L522
*)
HandleBlockResponse(msg, bPool) ==
LET h == msg.block.height IN
IF /\ msg.peerId \in bPool.peerIds
/\ bPool.blockStore[h] = NilBlock
/\ bPool.pendingBlocks[h] = msg.peerId
/\ msg.block.wellFormed
THEN
[bPool EXCEPT
!.blockStore = [bPool.blockStore EXCEPT ![h] = msg.block],
!.receivedBlocks = [bPool.receivedBlocks EXCEPT![h] = msg.peerId],
!.pendingBlocks = [bPool.pendingBlocks EXCEPT![h] = NilPeer]
]
ELSE RemovePeers({msg.peerId}, bPool)
HandleNoBlockResponse(msg, bPool) ==
RemovePeers({msg.peerId}, bPool)
\* Compute max peer height.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L440
MaxPeerHeight(bPool) ==
IF bPool.peerIds = AsPidSet({})
THEN 0 \* no peers, just return 0
ELSE LET Hts == {bPool.peerHeights[p] : p \in bPool.peerIds} IN
CHOOSE max \in Hts: \A h \in Hts: h <= max
(* Returns next height for which request should be sent.
Returns NilHeight in case there is no height for which request can be sent.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L454 *)
FindNextRequestHeight(bPool) ==
LET S == {i \in Heights:
/\ i >= bPool.height
/\ i <= MaxPeerHeight(bPool)
/\ bPool.blockStore[i] = NilBlock
/\ bPool.pendingBlocks[i] = NilPeer} IN
IF S = AsIntSet({})
THEN NilHeight
ELSE
CHOOSE min \in S: \A h \in S: h >= min
\* Returns number of pending requests for a given peer.
NumOfPendingRequests(bPool, peer) ==
LET peerPendingRequests ==
{h \in Heights:
/\ h >= bPool.height
/\ bPool.pendingBlocks[h] = peer
}
IN
Cardinality(peerPendingRequests)
(* Returns peer that can serve block for a given height.
Returns NilPeer in case there is no such peer.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L477 *)
FindPeerToServe(bPool, h) ==
LET peersThatCanServe == { p \in bPool.peerIds:
/\ bPool.peerHeights[p] >= h
/\ NumOfPendingRequests(bPool, p) < PEER_MAX_REQUESTS } IN
LET pendingBlocks ==
{i \in Heights:
/\ i >= bPool.height
/\ \/ bPool.pendingBlocks[i] /= NilPeer
\/ bPool.blockStore[i] /= NilBlock
} IN
IF \/ peersThatCanServe = AsPidSet({})
\/ Cardinality(pendingBlocks) >= TARGET_PENDING
THEN NilPeer
\* pick a peer that can serve request for height h that has minimum number of pending requests
ELSE CHOOSE p \in peersThatCanServe: \A q \in peersThatCanServe:
/\ NumOfPendingRequests(bPool, p) <= NumOfPendingRequests(bPool, q)
\* Make a request for a block (if possible) and return a request message and block pool.
CreateRequest(bPool) ==
LET nextHeight == FindNextRequestHeight(bPool) IN
IF nextHeight = NilHeight THEN [msg |-> AsOutMsg(NoMsg), pool |-> bPool]
ELSE
LET peer == FindPeerToServe(bPool, nextHeight) IN
IF peer = NilPeer THEN [msg |-> AsOutMsg(NoMsg), pool |-> bPool]
ELSE
LET m == [type |-> "blockRequest", peerId |-> peer, height |-> nextHeight] IN
LET newPool == [bPool EXCEPT
!.pendingBlocks = [bPool.pendingBlocks EXCEPT ![nextHeight] = peer]
] IN
[msg |-> m, pool |-> newPool]
\* Returns node state, i.e., defines termination condition.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L432
ComputeNextState(bPool) ==
IF bPool.syncedBlocks = 0 \* corresponds to the syncTimeout in case no progress has been made for a period of time.
THEN "finished"
ELSE IF /\ bPool.height > 1
/\ bPool.height >= MaxPeerHeight(bPool) \* see https://github.com/tendermint/tendermint/blob/61057a8b0af2beadee106e47c4616b279e83c920/blockchain/v2/scheduler.go#L566
THEN "finished"
ELSE "running"
(* Verify if commit is for the given block id and if commit has enough voting power.
See https://github.com/tendermint/tendermint/blob/61057a8b0af2beadee106e47c4616b279e83c920/blockchain/v2/processor_context.go#L12 *)
VerifyCommit(block, lastCommit) ==
PossibleCommit(block, lastCommit)
(* Tries to execute next block in the pool, i.e., defines block validation logic.
Returns new block pool (peers that have sent invalid blocks are removed).
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/processor.go#L135 *)
ExecuteBlocks(bPool) ==
LET bStore == bPool.blockStore IN
LET block0 == bStore[bPool.height - 1] IN
\* blockPool is initialized with height = TrustedHeight + 1,
\* so bStore[bPool.height - 1] is well defined
LET block1 == bStore[bPool.height] IN
LET block2 == bStore[bPool.height + 1] IN
IF block1 = NilBlock \/ block2 = NilBlock
THEN bPool \* we don't have two next consecutive blocks
ELSE IF ~IsMatchingValidators(block1, block0.NextVS)
\* Check that block1.VS = block0.Next.
\* Otherwise, CorrectBlocksInv fails.
\* In the implementation NextVS is part of the application state,
\* so a mismatch can be found without access to block0.NextVS.
THEN \* the block does not have the expected validator set
RemovePeers({bPool.receivedBlocks[bPool.height]}, bPool)
ELSE IF ~VerifyCommit(block1, block2.lastCommit)
\* Verify commit of block2 based on block1.
\* Interestingly, we do not have to call IsMatchingValidators.
THEN \* remove the peers of block1 and block2, as they are considered faulty
RemovePeers({bPool.receivedBlocks[bPool.height],
bPool.receivedBlocks[bPool.height + 1]},
bPool)
ELSE \* all good, execute block at position height
[bPool EXCEPT !.height = bPool.height + 1]
\* Defines logic for pruning peers.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L613
TryPrunePeer(bPool, suspectedSet, isTimedOut) ==
(* -----------------------------------------------------------------------------------------------------------------------*)
(* Corresponds to function prunablePeers in scheduler.go file. Note that this function only checks if block has been *)
(* received from a peer during peerTimeout period. *)
(* Note that in case no request has been scheduled to a correct peer, or a request has been scheduled *)
(* recently, so the peer hasn't responded yet, a peer will be removed as no block is received within peerTimeout. *)
(* In case of faulty peers, we don't have any guarantee that they will respond. *)
(* Therefore, we model this with nondeterministic behavior as it could lead to peer removal, for both correct and faulty. *)
(* See scheduler.go *)
(* https://github.com/tendermint/tendermint/blob/4298bbcc4e25be78e3c4f21979d6aa01aede6e87/blockchain/v2/scheduler.go#L335 *)
LET toRemovePeers == bPool.peerIds \intersect suspectedSet IN
(*
Corresponds to logic for pruning a peer that is responsible for delivering block for the next height.
The pruning logic for the next height is based on the time when a BlockRequest is sent. Therefore, if a request is sent
to a correct peer for the next height (blockPool.height), it should never be removed by this check as we assume that
correct peers respond timely and reliably. However, if a request is sent to a faulty peer then we
might get response on time or not, which is modelled with nondeterministic isTimedOut flag.
See scheduler.go
https://github.com/tendermint/tendermint/blob/4298bbcc4e25be78e3c4f21979d6aa01aede6e87/blockchain/v2/scheduler.go#L617
*)
LET nextHeightPeer == bPool.pendingBlocks[bPool.height] IN
LET prunablePeers ==
IF /\ nextHeightPeer /= NilPeer
/\ nextHeightPeer \in FAULTY
/\ isTimedOut
THEN toRemovePeers \union {nextHeightPeer}
ELSE toRemovePeers
IN
RemovePeers(prunablePeers, bPool)
\* Handle SyncTimeout. It models if progress has been made (height has increased) since the last SyncTimeout event.
HandleSyncTimeout(bPool) ==
[bPool EXCEPT
!.syncedBlocks = bPool.height - bPool.syncHeight,
!.syncHeight = bPool.height
]
HandleResponse(msg, bPool) ==
IF msg.type = "blockResponse" THEN
HandleBlockResponse(msg, bPool)
ELSE IF msg.type = "noBlockResponse" THEN
HandleNoBlockResponse(msg, bPool)
ELSE IF msg.type = "statusResponse" THEN
HandleStatusResponse(msg, bPool)
ELSE IF msg.type = "addPeer" THEN
AddPeer(msg.peerId, bPool)
ELSE IF msg.type = "removePeer" THEN
RemovePeers({msg.peerId}, bPool)
ELSE IF msg.type = "syncTimeout" THEN
HandleSyncTimeout(bPool)
ELSE
bPool
(*
At every node step we execute the following steps (atomically):
1) input message is consumed and the corresponding handler is called,
2) pruning logic is called
3) block execution is triggered (we try to execute block at next height)
4) a request to a peer is made (if possible) and
5) we decide if the termination condition is satisfied, and if so we stop.
*)
NodeStep ==
\E suspectedSet \in SUBSET AllPeerIds: \* suspectedSet is a nondeterministic set of peers
\E isTimedOut \in BOOLEAN:
LET bPool == HandleResponse(inMsg, blockPool) IN
LET bp == TryPrunePeer(bPool, suspectedSet, isTimedOut) IN
LET nbPool == ExecuteBlocks(bp) IN
LET msgAndPool == CreateRequest(nbPool) IN
LET nstate == ComputeNextState(msgAndPool.pool) IN
/\ state' = nstate
/\ blockPool' = msgAndPool.pool
/\ outMsg' = msgAndPool.msg
/\ inMsg' = AsInMsg(NoMsg)
\* If the node is running, then in every step we try to create a blockRequest.
\* In addition, the input message (if it exists) is consumed and processed.
NextNode ==
\/ /\ state = "running"
/\ NodeStep
\/ /\ state = "finished"
/\ UNCHANGED <<nvars, inMsg, outMsg>>
(********************************** Peers ***********************************)
InitPeers ==
\E pHeights \in [AllPeerIds -> Heights]:
peersState = [
peerHeights |-> pHeights,
statusRequested |-> FALSE,
blocksRequested |-> AsOutMsgSet({})
]
HandleStatusRequest(msg, pState) ==
[pState EXCEPT
!.statusRequested = TRUE
]
HandleBlockRequest(msg, pState) ==
[pState EXCEPT
!.blocksRequested = pState.blocksRequested \union AsOutMsgSet({msg})
]
HandleRequest(msg, pState) ==
IF msg = AsOutMsg(NoMsg)
THEN pState
ELSE IF msg.type = "statusRequest"
THEN HandleStatusRequest(msg, pState)
ELSE HandleBlockRequest(msg, pState)
CreateStatusResponse(peer, pState, anyHeight) ==
LET m ==
IF peer \in CORRECT
THEN AsInMsg([type |-> "statusResponse", peerId |-> peer, height |-> pState.peerHeights[peer]])
ELSE AsInMsg([type |-> "statusResponse", peerId |-> peer, height |-> anyHeight]) IN
[msg |-> m, peers |-> pState]
CreateBlockResponse(msg, pState, arbitraryBlock) ==
LET m ==
IF msg.peerId \in CORRECT
THEN AsInMsg([type |-> "blockResponse", peerId |-> msg.peerId, block |-> CorrectBlock(msg.height)])
ELSE AsInMsg([type |-> "blockResponse", peerId |-> msg.peerId, block |-> arbitraryBlock]) IN
LET npState ==
[pState EXCEPT
!.blocksRequested = pState.blocksRequested \ {msg}
] IN
[msg |-> m, peers |-> npState]
GrowPeerHeight(pState) ==
\E p \in CORRECT:
/\ pState.peerHeights[p] < MAX_HEIGHT
/\ peersState' = [pState EXCEPT !.peerHeights[p] = @ + 1]
/\ inMsg' = AsInMsg(NoMsg)
SendStatusResponseMessage(pState) ==
/\ \E arbitraryHeight \in Heights:
\E peer \in AllPeerIds:
LET msgAndPeers == CreateStatusResponse(peer, pState, arbitraryHeight) IN
/\ peersState' = msgAndPeers.peers
/\ inMsg' = msgAndPeers.msg
SendAddPeerMessage ==
\E peer \in AllPeerIds:
inMsg' = AsInMsg([type |-> "addPeer", peerId |-> peer])
SendRemovePeerMessage ==
\E peer \in AllPeerIds:
inMsg' = AsInMsg([type |-> "removePeer", peerId |-> peer])
SendSyncTimeoutMessage ==
inMsg' = AsInMsg([type |-> "syncTimeout"])
SendControlMessage ==
\/ SendAddPeerMessage
\/ SendRemovePeerMessage
\/ SendSyncTimeoutMessage
\* An extremely important property of block hashes (blockId):
\* If a commit is correct, then the previous block in the block store must be also correct.
UnforgeableBlockId(height, block) ==
block.hashEqRef => block = chain[height]
\* A faulty peer cannot forge enough of the validators signatures.
\* A more precise rule should have checked that the committers have over 2/3 of the VS's voting power.
NoFork(height, block) ==
(height > 1 /\ block.lastCommit.committers = chain[height - 1].VS)
=> block.lastCommit.blockIdEqRef
\* Can the block be produced by a faulty peer?
IsBlockByFaulty(height, block) ==
/\ block.height = height
/\ UnforgeableBlockId(height, block)
/\ NoFork(height, block)
SendBlockResponseMessage(pState) ==
\* a response to a requested block: either by a correct, or by a faulty peer
\/ /\ pState.blocksRequested /= AsOutMsgSet({})
/\ \E msg \in pState.blocksRequested:
\E block \in Blocks:
/\ IsBlockByFaulty(msg.height, block)
/\ LET msgAndPeers == CreateBlockResponse(msg, pState, block) IN
/\ peersState' = msgAndPeers.peers
/\ inMsg' = msgAndPeers.msg
\* a faulty peer can always send an unsolicited block
\/ \E peerId \in FAULTY:
\E block \in Blocks:
/\ IsBlockByFaulty(block.height, block)
/\ peersState' = pState
/\ inMsg' = AsInMsg([type |-> "blockResponse",
peerId |-> peerId, block |-> block])
SendNoBlockResponseMessage(pState) ==
/\ peersState' = pState
/\ inMsg' \in AsInMsgSet([type: {"noBlockResponse"}, peerId: FAULTY, height: Heights])
SendResponseMessage(pState) ==
\/ SendBlockResponseMessage(pState)
\/ SendNoBlockResponseMessage(pState)
\/ SendStatusResponseMessage(pState)
NextEnvStep(pState) ==
\/ SendResponseMessage(pState)
\/ GrowPeerHeight(pState)
\/ SendControlMessage /\ peersState' = pState
\* note that we propagate pState that was missing in the previous version
\* Peers consume a message and update their local state. A peer then makes a single step, i.e., it sends at most a single message.
\* The message sent could be either a response to a request or a faulty message (sent by faulty processes).
NextPeers ==
LET pState == HandleRequest(outMsg, peersState) IN
/\ outMsg' = AsOutMsg(NoMsg)
/\ NextEnvStep(pState)
\* the composition of the node, the peers, the network and scheduler
Init ==
/\ IsCorrectChain(chain) \* initialize the blockchain
/\ InitNode
/\ InitPeers
/\ turn = "Peers"
/\ inMsg = AsInMsg(NoMsg)
/\ outMsg = AsOutMsg([type |-> "statusRequest"])
Next ==
IF turn = "Peers"
THEN
/\ NextPeers
/\ turn' = "Node"
/\ UNCHANGED <<nvars, chain>>
ELSE
/\ NextNode
/\ turn' = "Peers"
/\ UNCHANGED <<peersState, chain>>
FlipTurn ==
turn' =
IF turn = "Peers" THEN
"Node"
ELSE
"Peers"
\* Compute max peer height. Used as a helper operator in properties.
MaxCorrectPeerHeight(bPool) ==
LET correctPeers == {p \in bPool.peerIds: p \in CORRECT} IN
IF correctPeers = AsPidSet({})
THEN 0 \* no peers, just return 0
ELSE LET Hts == {bPool.peerHeights[p] : p \in correctPeers} IN
CHOOSE max \in Hts: \A h \in Hts: h <= max
\* properties to check
TypeOK ==
/\ state \in States
/\ inMsg \in InMsgs
/\ outMsg \in OutMsgs
/\ turn \in {"Peers", "Node"}
/\ peersState \in [
peerHeights: [AllPeerIds -> Heights \union {NilHeight}],
statusRequested: BOOLEAN,
blocksRequested:
SUBSET
[type: {"blockRequest"}, peerId: AllPeerIds, height: Heights]
]
/\ blockPool \in [
height: Heights,
peerIds: SUBSET AllPeerIds,
peerHeights: [AllPeerIds -> Heights \union {NilHeight}],
blockStore: [Heights -> Blocks \union {NilBlock}],
receivedBlocks: [Heights -> AllPeerIds \union {NilPeer}],
pendingBlocks: [Heights -> AllPeerIds \union {NilPeer}],
syncedBlocks: Heights \union {NilHeight, -1},
syncHeight: Heights
]
(* Incorrect synchronization: The last block may be never received *)
Sync1 ==
[](state = "finished" =>
blockPool.height >= MaxCorrectPeerHeight(blockPool))
Sync1AsInv ==
state = "finished" => blockPool.height >= MaxCorrectPeerHeight(blockPool)
(* Incorrect synchronization, as there may be a timeout *)
Sync2 ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ [] (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
Sync2AsInv ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
(* Correct synchronization *)
Sync3 ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ blockPool.syncedBlocks <= 0 \* timeout
\/ [] (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
Sync3AsInv ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ blockPool.syncedBlocks <= 0 \* timeout
\/ (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
(* Naive termination *)
\* This property is violated, as the faulty peers may produce infinitely many responses
Termination ==
WF_turn(FlipTurn) => <>(state = "finished")
(* Termination by timeout: the protocol terminates, if there is a timeout *)
\* the precondition: fair flip turn and eventual timeout when no new blocks were synchronized
TerminationByTOPre ==
/\ WF_turn(FlipTurn)
/\ <>(inMsg.type = "syncTimeout" /\ blockPool.height <= blockPool.syncHeight)
TerminationByTO ==
TerminationByTOPre => <>(state = "finished")
(* The termination property when we only have correct peers *)
\* as correct peers may spam the node with addPeer, removePeer, and statusResponse,
\* we have to enforce eventual response (there are no queues in our spec)
CorrBlockResponse ==
\A h \in Heights:
[](outMsg.type = "blockRequest" /\ outMsg.height = h
=> <>(inMsg.type = "blockResponse" /\ inMsg.block.height = h))
\* a precondition for termination in presence of only correct processes
TerminationCorrPre ==
/\ FAULTY = AsPidSet({})
/\ WF_turn(FlipTurn)
/\ CorrBlockResponse
\* termination when there are only correct processes
TerminationCorr ==
TerminationCorrPre => <>(state = "finished")
\* All synchronized blocks (but the last one) are exactly like in the reference chain
CorrectBlocksInv ==
\/ state /= "finished"
\/ \A h \in 1..(blockPool.height - 1):
blockPool.blockStore[h] = chain[h]
\* A false expectation that the protocol only finishes with the blocks
\* from the processes that had not been suspected of being faulty
SyncFromCorrectInv ==
\/ state /= "finished"
\/ \A h \in 1..blockPool.height:
blockPool.receivedBlocks[h] \in blockPool.peerIds \union {NilPeer}
\* A false expectation that a correct process is never removed from the set of peer ids.
\* A correct process may reply too late and then gets evicted.
CorrectNeverSuspectedInv ==
CORRECT \subseteq blockPool.peerIds
BlockPoolInvariant ==
\A h \in Heights:
\* waiting for a block to arrive
\/ /\ blockPool.receivedBlocks[h] = NilPeer
/\ blockPool.blockStore[h] = NilBlock
\* valid block is received and is present in the store
\/ /\ blockPool.receivedBlocks[h] /= NilPeer
/\ blockPool.blockStore[h] /= NilBlock
/\ blockPool.pendingBlocks[h] = NilPeer
(* a few simple properties that trigger counterexamples *)
\* Shows execution in which peer set is empty
PeerSetIsNeverEmpty == blockPool.peerIds /= AsPidSet({})
\* Shows execution in which state = "finished" and MaxPeerHeight is not equal to 1
StateNotFinished ==
state /= "finished" \/ MaxPeerHeight(blockPool) = 1
=============================================================================
\*=============================================================================
\* Modification History
\* Last modified Fri May 29 20:41:53 CEST 2020 by igor
\* Last modified Thu Apr 16 16:57:22 CEST 2020 by zarkomilosevic
\* Created Tue Feb 04 10:36:18 CET 2020 by zarkomilosevic

+ 3
- 0
spec/consensus/bft-time.md View File

@ -1,3 +1,6 @@
---
order: 2
---
# BFT Time
Tendermint provides a deterministic, Byzantine fault-tolerant, source of time.


+ 3
- 0
spec/consensus/consensus.md View File

@ -1,3 +1,6 @@
---
order: 1
---
# Byzantine Consensus Algorithm
## Terms


+ 3
- 0
spec/consensus/creating-proposal.md View File

@ -1,3 +1,6 @@
---
order: 2
---
# Creating a proposal
A block consists of a header, transactions, votes (the commit),


+ 6
- 0
spec/consensus/light-client/README.md View File

@ -1,3 +1,9 @@
---
order: 1
parent:
title: Light Client
order: false
---
# Tendermint Light Client Protocol
Deprecated, please see [light-client](../../light-client/README.md).

spec/reactors/consensus/proposer-selection.md → spec/consensus/proposer-selection.md View File


+ 40
- 4
spec/core/data_structures.md View File

@ -17,7 +17,7 @@ The Tendermint blockchains consists of a short list of data types:
- [`Vote`](#vote)
- [`CanonicalVote`](#canonicalvote)
- [`SignedMsgType`](#signedmsgtype)
- [`EvidenceData`](#evidence_data)
- [`EvidenceList`](#evidence_list)
- [`Evidence`](#evidence)
- [`DuplicateVoteEvidence`](#duplicatevoteevidence)
- [`LightClientAttackEvidence`](#lightclientattackevidence)
@ -40,7 +40,7 @@ and a list of evidence of malfeasance (ie. signing conflicting votes).
|--------|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------|
| Header | [Header](#header) | Header corresponding to the block. This field contains information used throughout consensus and other areas of the protocol. To find out what it contains, visit [header](#header) | Must adhere to the validation rules of [header](#header) |
| Data | [Data](#data) | Data contains a list of transactions. The contents of the transactions are unknown to Tendermint. | This field can be empty or populated, but no validation is performed. Applications can perform validation on individual transactions prior to block creation using [checkTx](../abci/abci.md#checktx). |
| Evidence | [EvidenceData](#evidence_data) | Evidence contains a list of infractions committed by validators. | Can be empty, but when populated the validations rules from [evidenceData](#evidence_data) apply |
| Evidence | [EvidenceList](#evidence_list) | Evidence contains a list of infractions committed by validators. | Can be empty, but when populated the validation rules from [evidenceList](#evidence_list) apply |
| LastCommit | [Commit](#commit) | `LastCommit` includes one vote for every validator. All votes must either be for the previous block, nil or absent. If a vote is for the previous block it must have a valid signature from the corresponding validator. The sum of the voting power of the validators that voted must be greater than 2/3 of the total voting power of the complete validator set. The number of votes in a commit is limited to 10000 (see `types.MaxVotesCount`). | Must be empty for the initial height and must adhere to the validation rules of [commit](#commit). |
## Execution
@ -150,6 +150,16 @@ See [MerkleRoot](./encoding.md#MerkleRoot) for details.
| Total | int32 | Total amount of parts for a block | Must be > 0 |
| Hash | slice of bytes (`[]byte`) | MerkleRoot of a serialized block | Must be of length 32 |
## Part
Part defines a part of a block. In Tendermint, blocks are broken into `parts` for gossip.
| Name | Type | Description | Validation |
|-------|-----------------|---------------------------------------------------------------|---------------------------------------------------------|
| index | int32 | Index of this part within the block's part set | Must be >= 0 |
| bytes | bytes | The portion of the serialized block contained in this part | Must not be empty |
| proof | [Proof](#proof) | Merkle proof of the part's inclusion in the block's part set | Must adhere to the validation rules of [proof](#proof) |
## Time
Tendermint uses the [Google.Protobuf.WellKnownTypes.Timestamp](https://developers.google.com/protocol-buffers/docs/reference/csharp/class/google/protobuf/well-known-types/timestamp)
@ -255,6 +265,23 @@ func (vote *Vote) Verify(chainID string, pubKey crypto.PubKey) error {
}
```
### Proposal
Proposal contains the height and round for which the proposal is made, the BlockID as a unique identifier
of the proposed block, a timestamp, and the POLRound (a so-called Proof-of-Lock (POL) round) that is needed for
termination of consensus. If POLRound >= 0, then BlockID corresponds to the block that
is locked in POLRound. The message is signed by the validator private key.
| Name | Type | Description | Validation |
|-----------|---------------------------------|---------------------------------------------------------------------------------------|---------------------------------------------------------|
| Type | [SignedMsgType](#signedmsgtype) | Represents a Proposal [SignedMsgType](#signedmsgtype) | Must be `ProposalType` [signedMsgType](#signedmsgtype) |
| Height | int64 | Height for which this proposal was created | Must be > 0 |
| Round | int32 | Round that the proposal corresponds to. | Must be > 0 |
| POLRound | int64 | Proof of lock | Must be > 0 |
| BlockID | [BlockID](#blockid) | The blockID of the corresponding block. | [BlockID](#blockid) |
| Timestamp | [Time](#Time) | Timestamp represents the time at which a validator signed. | [Time](#time) |
| Signature | slice of bytes (`[]byte`) | Signature by the validator if they participated in consensus for the associated block. | Length of signature must be > 0 and < 64 |
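For illustration only, here is a minimal Go sketch of a structure with these fields. The names, Go types, and the `ProposalType` value are assumptions drawn from the table above, not the normative definition used by the implementation.

```go
package main

import (
	"fmt"
	"time"
)

// Illustrative stand-ins for types specified elsewhere in this document.
type SignedMsgType int32

type BlockID struct {
	Hash []byte // 32-byte hash of the proposed block
}

// ProposalType is assumed here for the sketch; see SignedMsgType above.
const ProposalType SignedMsgType = 32

// Proposal mirrors the table above. Field names and types are assumptions
// for illustration, not the normative wire format.
type Proposal struct {
	Type      SignedMsgType
	Height    int64
	Round     int32
	POLRound  int64 // >= 0 means BlockID was locked in that round
	BlockID   BlockID
	Timestamp time.Time
	Signature []byte // signature over the canonical form of the proposal
}

func main() {
	p := Proposal{
		Type:      ProposalType,
		Height:    10,
		Round:     0,
		POLRound:  -1, // no proof-of-lock round for this proposal
		BlockID:   BlockID{Hash: make([]byte, 32)},
		Timestamp: time.Now(),
	}
	fmt.Printf("proposal: height=%d round=%d polRound=%d\n", p.Height, p.Round, p.POLRound)
}
```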
## SignedMsgType
Signed message type represents a signed message in consensus.
@ -278,9 +305,9 @@ Signatures in Tendermint are raw bytes representing the underlying signature.
See the [signature spec](./encoding.md#key-types) for more.
## EvidenceData
## EvidenceList
EvidenceData is a simple wrapper for a list of evidence:
EvidenceList is a simple wrapper for a list of evidence:
| Name | Type | Description | Validation |
|----------|--------------------------------|----------------------------------------|-----------------------------------------------------------------|
@ -439,3 +466,12 @@ func SumTruncated(bz []byte) []byte {
| Name | Type | Description | Field Number |
|-------------|--------|-------------------------------|--------------|
| app_version | uint64 | The ABCI application version. | 1 |
### Proof
| Name | Type | Description | Field Number |
|-----------|----------------|-----------------------------------------------|--------------|
| total | int64 | Total number of items. | 1 |
| index | int64 | Index item to prove. | 2 |
| leaf_hash | bytes | Hash of item value. | 3 |
| aunts | repeated bytes | Hashes from leaf's sibling to a root's child. | 4 |
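To make the role of `aunts` concrete, the sketch below recomputes a root hash from a leaf and its aunts. It assumes an RFC 6962 style tree (0x00 leaf prefix, 0x01 inner prefix, split point at the largest power of two below the size), the scheme Tendermint's Merkle tree is based on; treat it as an illustration rather than the normative algorithm, and inputs are assumed well formed for brevity.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// leafHash and innerHash use RFC 6962 style domain separation (an assumption
// for this sketch): 0x00 prefixes a leaf, 0x01 prefixes an inner node.
func leafHash(leaf []byte) []byte {
	h := sha256.Sum256(append([]byte{0x00}, leaf...))
	return h[:]
}

func innerHash(left, right []byte) []byte {
	data := append([]byte{0x01}, left...)
	data = append(data, right...)
	h := sha256.Sum256(data)
	return h[:]
}

// splitPoint returns the largest power of two strictly smaller than n (n > 1).
func splitPoint(n int64) int64 {
	p := int64(1)
	for p*2 < n {
		p *= 2
	}
	return p
}

// rootFromAunts folds the aunt hashes onto the leaf hash, walking from the
// leaf's sibling (first aunt) up to a child of the root (last aunt).
func rootFromAunts(index, total int64, leaf []byte, aunts [][]byte) []byte {
	if total == 1 {
		return leaf
	}
	k := splitPoint(total)
	last := aunts[len(aunts)-1]
	rest := aunts[:len(aunts)-1]
	if index < k {
		// the leaf is in the left subtree; the last aunt is the right subtree root
		return innerHash(rootFromAunts(index, k, leaf, rest), last)
	}
	// the leaf is in the right subtree; the last aunt is the left subtree root
	return innerHash(last, rootFromAunts(index-k, total-k, leaf, rest))
}

func main() {
	// Two items: the proof for item 0 has a single aunt, the leaf hash of item 1.
	a, b := leafHash([]byte("item-0")), leafHash([]byte("item-1"))
	root := innerHash(a, b)
	fmt.Println("proof verifies:", bytes.Equal(root, rootFromAunts(0, 2, a, [][]byte{b})))
}
```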

+ 19
- 0
spec/p2p/messages/README.md View File

@ -0,0 +1,19 @@
---
order: 1
parent:
title: Messages
order: 1
---
# Messages
An implementation of the spec consists of many components. While many parts of these components are implementation specific, the p2p messages are not. In this section we cover the p2p messages of each component.
There are two parts to the p2p messages: the message itself and the channel it is sent on. Channels are component specific, and each component of Tendermint defines the messages that travel on its channels. When a node connects to a peer, it tells the other node which channels are available; this informs the peer which services the connecting node offers. You can read more on channels in [connection.md](../connection.md#mconnection). A sketch of how a component might describe its channels is shown after the list below.
- [Block Sync](./block-sync.md)
- [Mempool](./mempool.md)
- [Evidence](./evidence.md)
- [State Sync](./state-sync.md)
- [Pex](./pex.md)
- [Consensus](./consensus.md)
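As a rough illustration of the channel idea, here is a hedged Go sketch of the kind of descriptor a component (reactor) might register so that the channels it offers can be advertised to a newly connected peer. The struct fields and the example values are assumptions for illustration, not the actual p2p API.

```go
package main

import "fmt"

// ChannelDescriptor is an illustrative stand-in for what a reactor registers
// with the p2p layer; the exact fields of the real descriptor may differ.
type ChannelDescriptor struct {
	ID                  byte // channel identifier, unique per node
	Priority            int  // relative priority when sending on the connection
	SendQueueCapacity   int  // how many outbound messages may queue up
	RecvMessageCapacity int  // maximum size of an inbound message
}

func main() {
	// On connect, a node tells its peer which channels it supports, which in
	// turn tells the peer which services (components) the node offers.
	offered := []ChannelDescriptor{
		{ID: 0x40, Priority: 5, SendQueueCapacity: 1000, RecvMessageCapacity: 1 << 20}, // block sync (64)
		{ID: 0x30, Priority: 5, SendQueueCapacity: 100, RecvMessageCapacity: 1 << 10},  // mempool (48), illustrative
	}
	for _, ch := range offered {
		fmt.Printf("advertising channel 0x%02X\n", ch.ID)
	}
}
```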

+ 68
- 0
spec/p2p/messages/block-sync.md View File

@ -0,0 +1,68 @@
---
order: 2
---
# Block Sync
## Channel
Block sync has one channel.
| Name | Number |
|-------------------|--------|
| BlockchainChannel | 64 |
## Message Types
There are multiple message types for Block Sync
### BlockRequest
BlockRequest asks a peer for a block at the height specified.
| Name | Type | Description | Field Number |
|--------|-------|---------------------------|--------------|
| Height | int64 | Height of requested block | 1 |
### NoBlockResponse
NoBlockResponse notifies the peer requesting a block that the node does not contain it.
| Name | Type | Description | Field Number |
|--------|-------|---------------------------|--------------|
| Height | int64 | Height of requested block | 1 |
### BlockResponse
BlockResponse contains the block requested.
| Name | Type | Description | Field Number |
|-------|----------------------------------------------|-----------------|--------------|
| Block | [Block](../../core/data_structures.md#block) | Requested Block | 1 |
### StatusRequest
StatusRequest is an empty message that notifies the peer to respond with the highest and lowest blocks it has stored.
> Empty message.
### StatusResponse
StatusResponse responds to a peer with the highest and lowest block stored.
| Name | Type | Description | Field Number |
|--------|-------|-------------------------------------------------------------------|--------------|
| Height | int64 | Current Height of a node | 1 |
| base | int64 | First known block; if pruning is enabled it will be higher than 1 | 2 |
### Message
Message is a [`oneof` protobuf type](https://developers.google.com/protocol-buffers/docs/proto#oneof). The `oneof` consists of five messages.
| Name | Type | Description | Field Number |
|-------------------|----------------------------------|--------------------------------------------------------------|--------------|
| block_request | [BlockRequest](#blockrequest) | Request a block from a peer | 1 |
| no_block_response | [NoBlockResponse](#noblockresponse) | Response saying the node does not have the requested block | 2 |
| block_response | [BlockResponse](#blockresponse) | Response with the requested block | 3 |
| status_request | [StatusRequest](#statusrequest) | Request the highest and lowest block numbers from a peer | 4 |
| status_response | [StatusResponse](#statusresponse) | Response with the highest and lowest block numbers the node has stored | 5 |
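As a rough illustration of how a node might answer the two request messages, here is a hedged Go sketch. The `Store` interface and the Go message structs are hypothetical stand-ins for the node's block store and the generated protobuf types.

```go
package blocksync

// Hypothetical Go shapes mirroring the wire messages above; a real node uses
// the generated protobuf types instead.
type (
	BlockRequest    struct{ Height int64 }
	NoBlockResponse struct{ Height int64 }
	BlockResponse   struct{ Block []byte } // serialized block, simplified here
	StatusRequest   struct{}
	StatusResponse  struct{ Height, Base int64 }
)

// Store is an assumed minimal view of the node's block store.
type Store interface {
	Height() int64                 // highest stored block
	Base() int64                   // lowest stored block (may be > 1 with pruning)
	LoadBlock(height int64) []byte // nil if the block is not stored
}

// HandleRequest sketches how a node could answer the two request messages,
// returning the message to send back to the requesting peer.
func HandleRequest(store Store, msg interface{}) interface{} {
	switch m := msg.(type) {
	case *BlockRequest:
		if b := store.LoadBlock(m.Height); b != nil {
			return &BlockResponse{Block: b}
		}
		// Tell the peer explicitly that we do not have this block.
		return &NoBlockResponse{Height: m.Height}
	case *StatusRequest:
		// Report the highest and lowest blocks currently stored.
		return &StatusResponse{Height: store.Height(), Base: store.Base()}
	default:
		return nil
	}
}
```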

+ 149
- 0
spec/p2p/messages/consensus.md View File

@ -0,0 +1,149 @@
---
order: 7
---
# Consensus
## Channel
Consensus has four separate channels. The channel identifiers are listed below.
| Name | Number |
|--------------------|--------|
| StateChannel | 32 |
| DataChannel | 33 |
| VoteChannel | 34 |
| VoteSetBitsChannel | 35 |
## Message Types
### Proposal
Proposal is sent when a new block is proposed. It is a suggestion of what the
next block in the blockchain should be.
| Name | Type | Description | Field Number |
|----------|----------------------------------------------------|----------------------------------------|--------------|
| proposal | [Proposal](../../core/data_structures.md#proposal) | Proposed Block to come to consensus on | 1 |
### Vote
Vote is sent to vote for some block (or to inform others that a process does not vote in the
current round). Vote is defined in the
[Blockchain](https://github.com/tendermint/spec/blob/master/spec/core/data_structures.md#blockid)
section and contains the validator's
information (validator address and index), the height and round for which the vote is sent, the vote type,
the blockID if the process votes for some block (`nil` otherwise) and a timestamp when the vote is sent. The
message is signed by the validator private key.
| Name | Type | Description | Field Number |
|------|--------------------------------------------|---------------------------|--------------|
| vote | [Vote](../../core/data_structures.md#vote) | Vote for a proposed Block | 1 |
### BlockPart
BlockPart is sent when gossiping a piece of the proposed block. It contains height, round
and the block part.
| Name | Type | Description | Field Number |
|--------|--------------------------------------------|----------------------------------------|--------------|
| height | int64 | Height of corresponding block. | 1 |
| round | int32 | Round of voting to finalize the block. | 2 |
| part | [Part](../../core/data_structures.md#part) | A part of the block. | 3 |
### NewRoundStep
NewRoundStep is sent for every step transition during the core consensus algorithm execution.
It is used in the gossip part of the Tendermint protocol to inform peers about a current
height/round/step a process is in.
| Name | Type | Description | Field Number |
|--------------------------|--------|----------------------------------------|--------------|
| height | int64 | Height of corresponding block | 1 |
| round | int32 | Round of voting to finalize the block. | 2 |
| step | uint32 | Current consensus step of the node | 3 |
| seconds_since_start_time | int64 | Seconds elapsed since the start of the current height | 4 |
| last_commit_round | int32 | Round of the commit for the previous height | 5 |
### NewValidBlock
NewValidBlock is sent when a validator observes a valid block B in some round r,
i.e., there is a Proposal for block B and 2/3+ prevotes for the block B in the round r.
It contains height and round in which valid block is observed, block parts header that describes
the valid block and is used to obtain all
block parts, and a bit array of the block parts a process currently has, so its peers can know what
parts it is missing so they can send them.
In case the block is also committed, then IsCommit flag is set to true.
| Name | Type | Description | Field Number |
|-----------------------|--------------------------------------------------------------|----------------------------------------|--------------|
| height | int64 | Height of corresponding block | 1 |
| round | int32 | Round of voting to finalize the block. | 2 |
| block_part_set_header | [PartSetHeader](../../core/data_structures.md#partsetheader) | Part set header of the observed valid block | 3 |
| block_parts | bitarray | Bit array of the block parts the node currently has | 4 |
| is_commit | bool | True if the block was also committed | 5 |
### ProposalPOL
ProposalPOL is sent when a previous block is re-proposed.
It is used to inform peers in what round the process learned of this block (ProposalPOLRound),
and what prevotes for the re-proposed block the process has.
| Name | Type | Description | Field Number |
|--------------------|----------|-------------------------------|--------------|
| height | int64 | Height of corresponding block | 1 |
| proposal_pol_round | int32 | Round in which the proof-of-lock was obtained | 2 |
| proposal_pol | bitarray | Bit array of the prevotes the node has for the re-proposed block | 3 |
### HasVote
HasVote is sent to indicate that a particular vote has been received. It contains height,
round, vote type and the index of the validator that is the originator of the corresponding vote.
| Name | Type | Description | Field Number |
|--------|------------------------------------------------------------------|----------------------------------------|--------------|
| height | int64 | Height of corresponding block | 1 |
| round | int32 | Round of voting to finalize the block. | 2 |
| type | [SignedMessageType](../../core/data_structures.md#signedmsgtype) | Type of the vote (prevote or precommit) | 3 |
| index | int32 | Index of the validator that cast the vote | 4 |
### VoteSetMaj23
VoteSetMaj23 is sent to indicate that a process has seen +2/3 votes for some BlockID.
It contains height, round, vote type and the BlockID.
| Name | Type | Description | Field Number |
|--------|------------------------------------------------------------------|----------------------------------------|--------------|
| height | int64 | Height of corresponding block | 1 |
| round | int32 | Round of voting to finalize the block. | 2 |
| type | [SignedMessageType](../../core/data_structures.md#signedmsgtype) | Type of the votes (prevote or precommit) | 3 |
### VoteSetBits
VoteSetBits is sent to communicate the bit-array of votes a process has seen for a given
BlockID. It contains height, round, vote type, BlockID and a bit array of
the votes a process has.
| Name | Type | Description | Field Number |
|----------|------------------------------------------------------------------|----------------------------------------|--------------|
| height | int64 | Height of corresponding block | 1 |
| round | int32 | Round of voting to finalize the block. | 2 |
| type | [SignedMessageType](../../core/data_structures.md#signedmsgtype) | Type of the votes (prevote or precommit) | 3 |
| block_id | [BlockID](../../core/data_structures.md#blockid) | The BlockID the votes are for | 4 |
| votes | BitArray | Bit array of the votes the node has for the given BlockID | 5 |
### Message
Message is a [`oneof` protobuf type](https://developers.google.com/protocol-buffers/docs/proto#oneof).
| Name | Type | Description | Field Number |
|-----------------|---------------------------------|----------------------------------------|--------------|
| new_round_step | [NewRoundStep](#newroundstep) | Announces a height/round/step transition | 1 |
| new_valid_block | [NewValidBlock](#newvalidblock) | Announces an observed valid block | 2 |
| proposal | [Proposal](#proposal) | Block proposal | 3 |
| proposal_pol | [ProposalPOL](#proposalpol) | Proof-of-lock for a re-proposed block | 4 |
| block_part | [BlockPart](#blockpart) | A part of the proposed block | 5 |
| vote | [Vote](#vote) | A prevote or precommit for a block | 6 |
| has_vote | [HasVote](#hasvote) | Announces that a particular vote has been received | 7 |
| vote_set_maj23 | [VoteSetMaj23](#votesetmaj23) | Announces that +2/3 votes were seen for a BlockID | 8 |
| vote_set_bits | [VoteSetBits](#votesetbits) | Bit array of votes seen for a given BlockID | 9 |
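The sketch below shows one way to picture how these message kinds relate to the four channels listed at the top of this page. The mapping follows the reactor's gossip routines (data gossip on the DataChannel, vote gossip on the VoteChannel, state announcements on the StateChannel) and should be read as illustrative rather than normative, in particular for `VoteSetMaj23`.

```go
package consensus

// Channel identifiers from the table at the top of this page.
const (
	StateChannel       = byte(0x20) // 32
	DataChannel        = byte(0x21) // 33
	VoteChannel        = byte(0x22) // 34
	VoteSetBitsChannel = byte(0x23) // 35
)

// channelFor sketches which channel each message kind is sent on.
func channelFor(msg string) byte {
	switch msg {
	case "NewRoundStep", "NewValidBlock", "HasVote", "VoteSetMaj23":
		return StateChannel
	case "Proposal", "ProposalPOL", "BlockPart":
		return DataChannel
	case "Vote":
		return VoteChannel
	case "VoteSetBits":
		return VoteSetBitsChannel
	default:
		return 0
	}
}
```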

+ 23
- 0
spec/p2p/messages/evidence.md View File

@ -0,0 +1,23 @@
---
order: 3
---
# Evidence
## Channel
Evidence has one channel. The channel identifier is listed below.
| Name | Number |
|-----------------|--------|
| EvidenceChannel | 56 |
## Message Types
### EvidenceList
EvidenceList consists of a list of verified evidence. This evidence will already have been propagated throughout the network. EvidenceList is used in two places: as a p2p message and within the [block](../../core/data_structures.md#block).
| Name | Type | Description | Field Number |
|----------|-------------------------------------------------------------|------------------------|--------------|
| evidence | repeated [Evidence](../../core/data_structures.md#evidence) | List of valid evidence | 1 |

+ 33
- 0
spec/p2p/messages/mempool.md View File

@ -0,0 +1,33 @@
---
order: 4
---
# Mempool
## Channel
Mempool has one channel. The channel identifier is listed below.
| Name | Number |
|----------------|--------|
| MempoolChannel | 48 |
## Message Types
There is currently only one message that Mempool broadcasts and receives over
the p2p gossip network (via the reactor): `TxsMessage`
### Txs
A list of transactions. The sending node has checked these transactions against its application (via CheckTx), but this does not guarantee they are valid for the receiver; each node's application must check them itself.
| Name | Type | Description | Field Number |
|------|----------------|----------------------|--------------|
| txs | repeated bytes | List of transactions | 1 |
### Message
Message is a [`oneof` protobuf type](https://developers.google.com/protocol-buffers/docs/proto#oneof). The `oneof` consists of a single message, [`Txs`](#txs).
| Name | Type | Description | Field Number |
|------|-------------|-----------------------|--------------|
| txs | [Txs](#txs) | List of transactions | 1 |

+ 48
- 0
spec/p2p/messages/pex.md View File

@ -0,0 +1,48 @@
---
order: 6
---
# Peer Exchange
## Channels
Pex has one channel. The channel identifier is listed below.
| Name | Number |
|------------|--------|
| PexChannel | 0 |
## Message Types
### PexRequest
PexRequest is an empty message requesting a list of peers.
> Empty message.
### PexAddrs
PexAddrs is a list of net addresses provided to a peer to dial.
| Name | Type | Description | Field Number |
|-------|------------------------------------|------------------------------------------|--------------|
| addrs | repeated [NetAddress](#netaddress) | List of peer addresses available to dial | 1 |
### NetAddress
NetAddress provides needed information for a node to dial a peer.
| Name | Type | Description | Field Number |
|------|--------|------------------|--------------|
| id | string | NodeID of a peer | 1 |
| ip | string | The IP of a node | 2 |
| port | uint32 | Port of a peer | 3 |
### Message
Message is a [`oneof` protobuf type](https://developers.google.com/protocol-buffers/docs/proto#oneof). The `oneof` consists of two messages.
| Name | Type | Description | Field Number |
|-------------|---------------------------|------------------------------------------------------|--------------|
| pex_request | [PexRequest](#pexrequest) | Empty request asking for a list of addresses to dial | 1 |
| pex_addrs | [PexAddrs](#pexaddrs) | List of addresses to dial | 2 |

+ 85
- 0
spec/p2p/messages/state-sync.md View File

@ -0,0 +1,85 @@
---
order: 5
---
# State Sync
## Channels
State sync has two distinct channels. The channel identifiers are listed below.
| Name | Number |
|-----------------|--------|
| SnapshotChannel | 96 |
| ChunkChannel | 97 |
## Message Types
### SnapshotRequest
When a new node begins state syncing, it will ask all peers it encounters whether they have any
available snapshots:
> Empty message.
### SnapshotResponse
The receiver will query the local ABCI application via `ListSnapshots`, and send a message
containing snapshot metadata (limited to 4 MB) for each of the 10 most recent snapshots stored at the application layer:
| Name | Type | Description | Field Number |
|----------|--------|-----------------------------------------------------------|--------------|
| height | uint64 | Height at which the snapshot was taken | 1 |
| format | uint32 | Format of the snapshot. | 2 |
| chunks | uint32 | How many chunks make up the snapshot | 3 |
| hash | bytes | Arbitrary snapshot hash | 4 |
| metadata | bytes | Arbitrary application data. **May be non-deterministic.** | 5 |
### ChunkRequest
The node running state sync will offer these snapshots to the local ABCI application via
`OfferSnapshot` ABCI calls, and keep track of which peers contain which snapshots. Once a snapshot
is accepted, the state syncer will request snapshot chunks from appropriate peers:
| Name | Type | Description | Field Number |
|--------|--------|-------------------------------------------------------------|--------------|
| height | uint64 | Height at which the chunk was created | 1 |
| format | uint32 | Format chosen for the chunk. **May be non-deterministic.** | 2 |
| index | uint32 | Index of the chunk within the snapshot. | 3 |
### ChunkResponse
The receiver will load the requested chunk from its local application via `LoadSnapshotChunk`,
and respond with it (limited to 16 MB):
| Name | Type | Description | Field Number |
|---------|--------|-------------------------------------------------------------|--------------|
| height | uint64 | Height at which the chunk was created | 1 |
| format | uint32 | Format chosen for the chunk. **May be non-deterministic.** | 2 |
| index | uint32 | Index of the chunk within the snapshot. | 3 |
| chunk | bytes | The binary chunk contents (limited to 16 MB) | 4 |
| missing | bool | True if the chunk was not found on the peer | 5 |
Here, `Missing` is used to signify that the chunk was not found on the peer, since an empty
chunk is a valid (although unlikely) response.
The returned chunk is given to the ABCI application via `ApplySnapshotChunk` until the snapshot
is restored. If a chunk response is not returned within some time, it will be re-requested,
possibly from a different peer.
The ABCI application is able to request peer bans and chunk refetching as part of the ABCI protocol.
If no state sync is in progress (i.e. during normal operation), any unsolicited response messages
are discarded.
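A minimal Go sketch of the fetch/apply loop described above. The `ChunkRequester` and `SnapshotApplier` interfaces are hypothetical; peer selection, peer bans, and application-driven refetch hints are omitted.

```go
package statesync

import (
	"context"
	"errors"
	"time"
)

// ChunkRequester asks some peer for chunk (height, format, index) and returns its bytes.
type ChunkRequester interface {
	Request(ctx context.Context, height uint64, format, index uint32) ([]byte, error)
}

// SnapshotApplier feeds a chunk to the ABCI application (ApplySnapshotChunk)
// and reports whether the snapshot is now fully restored.
type SnapshotApplier interface {
	ApplyChunk(index uint32, chunk []byte) (done bool, err error)
}

// restore fetches chunks in order, re-requesting on timeout or error
// (possibly from a different peer), until the application reports the
// snapshot as restored.
func restore(req ChunkRequester, app SnapshotApplier, height uint64, format, chunks uint32, timeout time.Duration) error {
	for index := uint32(0); index < chunks; index++ {
		for {
			ctx, cancel := context.WithTimeout(context.Background(), timeout)
			chunk, err := req.Request(ctx, height, format, index)
			cancel()
			if err != nil {
				continue // timed out or failed: re-request the chunk
			}
			done, err := app.ApplyChunk(index, chunk)
			if err != nil {
				return err
			}
			if done {
				return nil
			}
			break // move on to the next chunk
		}
	}
	return errors.New("all chunks sent but snapshot not restored")
}
```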
### Message
Message is a [`oneof` protobuf type](https://developers.google.com/protocol-buffers/docs/proto#oneof). The `oneof` consists of four messages.
| Name | Type | Description | Field Number |
|--------------------|---------------------------------------|----------------------------------------------|--------------|
| snapshots_request | [SnapshotRequest](#snapshotrequest) | Request a recent snapshot from a peer | 1 |
| snapshots_response | [SnapshotResponse](#snapshotresponse) | Respond with the most recent snapshot stored | 2 |
| chunk_request | [ChunkRequest](#chunkrequest) | Request chunks of the snapshot. | 3 |
| chunk_response | [ChunkResponse](#chunkresponse) | Response with a chunk used to recreate state | 4 |

BIN
spec/reactors/block_sync/img/bc-reactor-routines.png View File

Width: 1602  |  Height: 1132  |  Size: 265 KiB

BIN
spec/reactors/block_sync/img/bc-reactor.png View File

Width: 1520  |  Height: 1340  |  Size: 43 KiB

+ 0
- 43
spec/reactors/block_sync/impl.md View File

@ -1,43 +0,0 @@
# Blockchain Reactor v0 Module
## Blockchain Reactor
- coordinates the pool for syncing
- coordinates the store for persistence
- coordinates the playing of blocks towards the app using a sm.BlockExecutor
- handles switching between fastsync and consensus
- it is a p2p.BaseReactor
- starts the pool.Start() and its poolRoutine()
- registers all the concrete types and interfaces for serialisation
### poolRoutine
- listens to these channels:
- pool requests blocks from a specific peer by posting to requestsCh, block reactor then sends
a &bcBlockRequestMessage for a specific height
- pool signals timeout of a specific peer by posting to timeoutsCh
- switchToConsensusTicker to periodically try and switch to consensus
- trySyncTicker to periodically check if we have fallen behind and then catch-up sync
- if there aren't any new blocks available on the pool it skips syncing
- tries to sync the app by taking downloaded blocks from the pool, gives them to the app and stores
them on disk
- implements Receive which is called by the switch/peer
- calls AddBlock on the pool when it receives a new block from a peer
## Block Pool
- responsible for downloading blocks from peers
- makeRequestersRoutine()
- removes timeout peers
- starts new requesters by calling makeNextRequester()
- requestRoutine():
- picks a peer and sends the request, then blocks until:
- pool is stopped by listening to pool.Quit
- requester is stopped by listening to Quit
- request is redone
- we receive a block
- gotBlockCh is strange
## Go Routines in Blockchain Reactor
![Go Routines Diagram](img/bc-reactor-routines.png)

+ 0
- 308
spec/reactors/block_sync/reactor.md View File

@ -1,308 +0,0 @@
# Blockchain Reactor
The Blockchain Reactor's high level responsibility is to enable peers who are
far behind the current state of the consensus to quickly catch up by downloading
many blocks in parallel, verifying their commits, and executing them against the
ABCI application.
Tendermint full nodes run the Blockchain Reactor as a service to provide blocks
to new nodes. New nodes run the Blockchain Reactor in "fast_sync" mode,
where they actively make requests for more blocks until they sync up.
Once caught up, "fast_sync" mode is disabled and the node switches to
using (and turns on) the Consensus Reactor.
## Message Types
```go
const (
msgTypeBlockRequest = byte(0x10)
msgTypeBlockResponse = byte(0x11)
msgTypeNoBlockResponse = byte(0x12)
msgTypeStatusResponse = byte(0x20)
msgTypeStatusRequest = byte(0x21)
)
```
```go
type bcBlockRequestMessage struct {
Height int64
}
type bcNoBlockResponseMessage struct {
Height int64
}
type bcBlockResponseMessage struct {
Block Block
}
type bcStatusRequestMessage struct {
Height int64
}
type bcStatusResponseMessage struct {
Height int64
}
```
## Architecture and algorithm
The Blockchain reactor is organised as a set of concurrent tasks:
- Receive routine of Blockchain Reactor
- Task for creating Requesters
- Set of Requesters tasks
- Controller task
![Blockchain Reactor Architecture Diagram](img/bc-reactor.png)
### Data structures
These are the core data structures necessary to provide the Blockchain Reactor logic.
The Requester data structure is used to track the assignment of a request for a `block` at position `height` to a peer with id `peerID`.
```go
type Requester struct {
    mtx         Mutex
    block       Block
    height      int64
    peerID      p2p.ID
    redoChannel chan p2p.ID // redo may be sent multiple times; peerID is used to identify the repeat
}
```
Pool is a core data structure that stores last executed block (`height`), assignment of requests to peers (`requesters`), current height for each peer and number of pending requests for each peer (`peers`), maximum peer height, etc.
```go
type Pool struct {
mtx Mutex
requesters map[int64]*Requester
height int64
peers map[p2p.ID]*Peer
maxPeerHeight int64
numPending int32
store BlockStore
requestsChannel chan<- BlockRequest
errorsChannel chan<- peerError
}
```
Peer data structure stores for each peer current `height` and number of pending requests sent to the peer (`numPending`), etc.
```go
type Peer struct {
id p2p.ID
height int64
numPending int32
timeout *time.Timer
didTimeout bool
}
```
BlockRequest is an internal data structure used to denote the current mapping of a request for a block at some `height` to a peer (`PeerID`).
```go
type BlockRequest struct {
Height int64
PeerID p2p.ID
}
```
### Receive routine of Blockchain Reactor
It is executed upon message reception on the BlockchainChannel inside the p2p receive routine. There is a separate p2p receive routine (and therefore a separate receive routine of the Blockchain Reactor) executed for each peer. Note that "try to send" will not block (it returns immediately) if the outgoing buffer is full.
```go
handleMsg(pool, m):
upon receiving bcBlockRequestMessage m from peer p:
block = load block for height m.Height from pool.store
if block != nil then
try to send BlockResponseMessage(block) to p
else
try to send bcNoBlockResponseMessage(m.Height) to p
upon receiving bcBlockResponseMessage m from peer p:
pool.mtx.Lock()
requester = pool.requesters[m.Height]
if requester == nil then
error("peer sent us a block we didn't expect")
continue
if requester.block == nil and requester.peerID == p then
requester.block = m
pool.numPending -= 1 // atomic decrement
peer = pool.peers[p]
if peer != nil then
peer.numPending--
if peer.numPending == 0 then
peer.timeout.Stop()
// NOTE: we don't send Quit signal to the corresponding requester task!
else
trigger peer timeout to expire after peerTimeout
pool.mtx.Unlock()
upon receiving bcStatusRequestMessage m from peer p:
try to send bcStatusResponseMessage(pool.store.Height)
upon receiving bcStatusResponseMessage m from peer p:
pool.mtx.Lock()
peer = pool.peers[p]
if peer != nil then
peer.height = m.height
else
peer = create new Peer data structure with id = p and height = m.Height
pool.peers[p] = peer
if m.Height > pool.maxPeerHeight then
pool.maxPeerHeight = m.Height
pool.mtx.Unlock()
onTimeout(p):
send error message to pool error channel
peer = pool.peers[p]
peer.didTimeout = true
```
### Requester tasks
Requester task is responsible for fetching a single block at position `height`.
```go
fetchBlock(height, pool):
while true do {
peerID = nil
block = nil
peer = pickAvailablePeer(height)
peerID = peer.id
enqueue BlockRequest(height, peerID) to pool.requestsChannel
redo = false
while !redo do
select {
upon receiving Quit message do
return
upon receiving redo message with id on redoChannel do
if peerID == id {
mtx.Lock()
pool.numPending++
redo = true
mtx.UnLock()
}
}
}
pickAvailablePeer(height):
selectedPeer = nil
while selectedPeer = nil do
pool.mtx.Lock()
for each peer in pool.peers do
if !peer.didTimeout and peer.numPending < maxPendingRequestsPerPeer and peer.height >= height then
peer.numPending++
selectedPeer = peer
break
pool.mtx.Unlock()
if selectedPeer = nil then
sleep requestIntervalMS
return selectedPeer
```
### Task for creating Requesters
This task is responsible for continuously creating and starting Requester tasks.
```go
createRequesters(pool):
while true do
if !pool.isRunning then break
if pool.numPending < maxPendingRequests or size(pool.requesters) < maxTotalRequesters then
pool.mtx.Lock()
nextHeight = pool.height + size(pool.requesters)
requester = create new requester for height nextHeight
pool.requesters[nextHeight] = requester
pool.numPending += 1 // atomic increment
start requester task
pool.mtx.Unlock()
else
sleep requestIntervalMS
pool.mtx.Lock()
for each peer in pool.peers do
if !peer.didTimeout && peer.numPending > 0 && peer.curRate < minRecvRate then
send error on pool error channel
peer.didTimeout = true
if peer.didTimeout then
for each requester in pool.requesters do
if requester.getPeerID() == peer then
enqueue msg on requestor's redoChannel
delete(pool.peers, peerID)
pool.mtx.Unlock()
```
### Main blockchain reactor controller task
```go
main(pool):
create trySyncTicker with interval trySyncIntervalMS
create statusUpdateTicker with interval statusUpdateIntervalSeconds
create switchToConsensusTicker with interval switchToConsensusIntervalSeconds
while true do
select {
upon receiving BlockRequest(Height, Peer) on pool.requestsChannel:
try to send bcBlockRequestMessage(Height) to Peer
upon receiving error(peer) on errorsChannel:
stop peer for error
upon receiving message on statusUpdateTickerChannel:
broadcast bcStatusRequestMessage(bcR.store.Height) // message sent in a separate routine
upon receiving message on switchToConsensusTickerChannel:
pool.mtx.Lock()
receivedBlockOrTimedOut = pool.height > 0 || (time.Now() - pool.startTime) > 5 Seconds
ourChainIsLongestAmongPeers = pool.maxPeerHeight == 0 || pool.height >= pool.maxPeerHeight
haveSomePeers = size of pool.peers > 0
pool.mtx.Unlock()
if haveSomePeers && receivedBlockOrTimedOut && ourChainIsLongestAmongPeers then
switch to consensus mode
upon receiving message on trySyncTickerChannel:
for i = 0; i < 10; i++ do
pool.mtx.Lock()
firstBlock = pool.requesters[pool.height].block
secondBlock = pool.requesters[pool.height+1].block
if firstBlock == nil or secondBlock == nil then continue
pool.mtx.Unlock()
verify firstBlock using LastCommit from secondBlock
if verification failed
pool.mtx.Lock()
peerID = pool.requesters[pool.height].peerID
redoRequestsForPeer(peerId)
delete(pool.peers, peerID)
stop peer peerID for error
pool.mtx.Unlock()
else
delete(pool.requesters, pool.height)
save firstBlock to store
pool.height++
execute firstBlock
}
redoRequestsForPeer(pool, peerId):
for each requester in pool.requesters do
if requester.getPeerID() == peerID
enqueue msg on redoChannel for requester
```
## Channels
Defines `maxMsgSize` for the maximum size of incoming messages,
`SendQueueCapacity` and `RecvBufferCapacity` for maximum sending and
receiving buffers respectively. These are supposed to prevent amplification
attacks by setting up the upper limit on how much data we can receive & send to
a peer.
Sending incorrectly encoded data will result in stopping the peer.

+ 0
- 366
spec/reactors/consensus/consensus-reactor.md View File

@ -1,366 +0,0 @@
# Consensus Reactor
Consensus Reactor defines a reactor for the consensus service. It contains the ConsensusState service that
manages the state of the Tendermint consensus internal state machine.
When Consensus Reactor is started, it starts Broadcast Routine which starts ConsensusState service.
Furthermore, for each peer that is added to the Consensus Reactor, it creates (and manages) the known peer state
(that is used extensively in gossip routines) and starts the following three routines for the peer p:
Gossip Data Routine, Gossip Votes Routine and QueryMaj23Routine. Finally, Consensus Reactor is responsible
for decoding messages received from a peer and for adequate processing of the message depending on its type and content.
The processing normally consists of updating the known peer state and for some messages
(`ProposalMessage`, `BlockPartMessage` and `VoteMessage`) also forwarding the message to the ConsensusState module
for further processing. In the following text we specify the core functionality of those separate units of execution
that are part of the Consensus Reactor.
## ConsensusState service
Consensus State handles execution of the Tendermint BFT consensus algorithm. It processes votes and proposals,
and upon reaching agreement, commits blocks to the chain and executes them against the application.
The internal state machine receives input from peers, the internal validator and from a timer.
Inside Consensus State we have the following units of execution: Timeout Ticker and Receive Routine.
Timeout Ticker is a timer that schedules timeouts conditional on the height/round/step that are processed
by the Receive Routine.
### Receive Routine of the ConsensusState service
Receive Routine of the ConsensusState handles messages which may cause internal consensus state transitions.
It is the only routine that updates RoundState that contains internal consensus state.
Updates (state transitions) happen on timeouts, complete proposals, and 2/3 majorities.
It receives messages from peers, internal validators and from Timeout Ticker
and invokes the corresponding handlers, potentially updating the RoundState.
The details of the protocol (together with formal proofs of correctness) implemented by the Receive Routine are
discussed in a separate document. For an understanding of this document
it is sufficient to understand that the Receive Routine manages and updates RoundState data structure that is
then extensively used by the gossip routines to determine what information should be sent to peer processes.
## Round State
RoundState defines the internal consensus state. It contains height, round, round step, a current validator set,
a proposal and proposal block for the current round, locked round and block (if some block is being locked), set of
received votes and last commit and last validators set.
```go
type RoundState struct {
Height int64
Round int
Step RoundStepType
Validators ValidatorSet
Proposal Proposal
ProposalBlock Block
ProposalBlockParts PartSet
LockedRound int
LockedBlock Block
LockedBlockParts PartSet
Votes HeightVoteSet
LastCommit VoteSet
LastValidators ValidatorSet
}
```
Internally, consensus will run as a state machine with the following states:
- RoundStepNewHeight
- RoundStepNewRound
- RoundStepPropose
- RoundStepProposeWait
- RoundStepPrevote
- RoundStepPrevoteWait
- RoundStepPrecommit
- RoundStepPrecommitWait
- RoundStepCommit
## Peer Round State
Peer round state contains the known state of a peer. It is being updated by the Receive routine of
Consensus Reactor and by the gossip routines upon sending a message to the peer.
```golang
type PeerRoundState struct {
Height int64 // Height peer is at
Round int // Round peer is at, -1 if unknown.
Step RoundStepType // Step peer is at
Proposal bool // True if peer has proposal for this round
ProposalBlockPartsHeader PartSetHeader
ProposalBlockParts BitArray
ProposalPOLRound int // Proposal's POL round. -1 if none.
ProposalPOL BitArray // nil until ProposalPOLMessage received.
Prevotes BitArray // All votes peer has for this round
Precommits BitArray // All precommits peer has for this round
LastCommitRound int // Round of commit for last height. -1 if none.
LastCommit BitArray // All commit precommits of commit for last height.
CatchupCommitRound int // Round that we have commit for. Not necessarily unique. -1 if none.
CatchupCommit BitArray // All commit precommits peer has for this height & CatchupCommitRound
}
```
## Receive method of Consensus reactor
The entry point of the Consensus reactor is a receive method. When a message is
received from a peer p, normally the peer round state is updated
correspondingly, and some messages are passed for further processing, for
example to ConsensusState service. We now specify the processing of messages in
the receive method of Consensus reactor for each message type. In the following
message handler, `rs` and `prs` denote `RoundState` and `PeerRoundState`,
respectively.
### NewRoundStepMessage handler
```go
handleMessage(msg):
if msg is from smaller height/round/step then return
// Just remember these values.
prsHeight = prs.Height
prsRound = prs.Round
prsCatchupCommitRound = prs.CatchupCommitRound
prsCatchupCommit = prs.CatchupCommit
Update prs with values from msg
if prs.Height or prs.Round has been updated then
reset Proposal related fields of the peer state
if prs.Round has been updated and msg.Round == prsCatchupCommitRound then
prs.Precommits = prsCatchupCommit
if prs.Height has been updated then
if prsHeight+1 == msg.Height && prsRound == msg.LastCommitRound then
prs.LastCommitRound = msg.LastCommitRound
prs.LastCommit = prs.Precommits
} else {
prs.LastCommitRound = msg.LastCommitRound
prs.LastCommit = nil
}
Reset prs.CatchupCommitRound and prs.CatchupCommit
```
### NewValidBlockMessage handler
```go
handleMessage(msg):
if prs.Height != msg.Height then return
if prs.Round != msg.Round && !msg.IsCommit then return
prs.ProposalBlockPartsHeader = msg.BlockPartsHeader
prs.ProposalBlockParts = msg.BlockParts
```
The number of block parts is limited to 1601 (`types.MaxBlockPartsCount`) to
protect the node against DOS attacks.
### HasVoteMessage handler
```go
handleMessage(msg):
if prs.Height == msg.Height then
prs.setHasVote(msg.Height, msg.Round, msg.Type, msg.Index)
```
### VoteSetMaj23Message handler
```go
handleMessage(msg):
if prs.Height == msg.Height then
Record in rs that a peer claims to have a ⅔ majority for msg.BlockID
Send VoteSetBitsMessage showing the votes the node has for that BlockID
```
### ProposalMessage handler
```go
handleMessage(msg):
if prs.Height != msg.Height || prs.Round != msg.Round || prs.Proposal then return
prs.Proposal = true
if prs.ProposalBlockParts == empty set then // otherwise it is set in NewValidBlockMessage handler
prs.ProposalBlockPartsHeader = msg.BlockPartsHeader
prs.ProposalPOLRound = msg.POLRound
prs.ProposalPOL = nil
Send msg through internal peerMsgQueue to ConsensusState service
```
### ProposalPOLMessage handler
```go
handleMessage(msg):
if prs.Height != msg.Height or prs.ProposalPOLRound != msg.ProposalPOLRound then return
prs.ProposalPOL = msg.ProposalPOL
```
The number of votes is limited to 10000 (`types.MaxVotesCount`) to protect the
node against DOS attacks.
### BlockPartMessage handler
```go
handleMessage(msg):
if prs.Height != msg.Height || prs.Round != msg.Round then return
Record in prs that peer has block part msg.Part.Index
Send msg through internal peerMsgQueue to ConsensusState service
```
### VoteMessage handler
```go
handleMessage(msg):
Record in prs that a peer knows vote with index msg.vote.ValidatorIndex for particular height and round
Send msg through internal peerMsgQueue to ConsensusState service
```
### VoteSetBitsMessage handler
```go
handleMessage(msg):
Update prs for the bit-array of votes peer claims to have for the msg.BlockID
```
The number of votes is limited to 10000 (`types.MaxVotesCount`) to protect the
node against DOS attacks.
## Gossip Data Routine
It is used to send the following messages to the peer: `BlockPartMessage`, `ProposalMessage` and
`ProposalPOLMessage` on the DataChannel. The gossip data routine is based on the local RoundState (`rs`)
and the known PeerRoundState (`prs`). The routine repeats forever the logic shown below:
```go
1a) if rs.ProposalBlockPartsHeader == prs.ProposalBlockPartsHeader and the peer does not have all the proposal parts then
Part = pick a random proposal block part the peer does not have
Send BlockPartMessage(rs.Height, rs.Round, Part) to the peer on the DataChannel
if send returns true, record that the peer knows the corresponding block Part
Continue
1b) if (0 < prs.Height) and (prs.Height < rs.Height) then
help peer catch up using gossipDataForCatchup function
Continue
1c) if (rs.Height != prs.Height) or (rs.Round != prs.Round) then
Sleep PeerGossipSleepDuration
Continue
// at this point rs.Height == prs.Height and rs.Round == prs.Round
1d) if (rs.Proposal != nil and !prs.Proposal) then
Send ProposalMessage(rs.Proposal) to the peer
if send returns true, record that the peer knows Proposal
if 0 <= rs.Proposal.POLRound then
polRound = rs.Proposal.POLRound
prevotesBitArray = rs.Votes.Prevotes(polRound).BitArray()
Send ProposalPOLMessage(rs.Height, polRound, prevotesBitArray)
Continue
2) Sleep PeerGossipSleepDuration
```
### Gossip Data For Catchup
This function is responsible for helping a peer catch up if it is at a smaller height (prs.Height < rs.Height).
The function executes the following logic:
```go
if peer does not have all block parts for prs.ProposalBlockPart then
blockMeta = Load Block Metadata for height prs.Height from blockStore
if blockMeta.BlockID.PartsHeader != prs.ProposalBlockPartsHeader then
Sleep PeerGossipSleepDuration
return
Part = pick a random proposal block part the peer does not have
Send BlockPartMessage(prs.Height, prs.Round, Part) to the peer on the DataChannel
if send returns true, record that the peer knows the corresponding block Part
return
else Sleep PeerGossipSleepDuration
```
## Gossip Votes Routine
It is used to send the following message: `VoteMessage` on the VoteChannel.
The gossip votes routine is based on the local RoundState (`rs`)
and the known PeerRoundState (`prs`). The routine repeats forever the logic shown below:
```go
1a) if rs.Height == prs.Height then
if prs.Step == RoundStepNewHeight then
vote = random vote from rs.LastCommit the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
if prs.Step <= RoundStepPrevote and prs.Round != -1 and prs.Round <= rs.Round then
Prevotes = rs.Votes.Prevotes(prs.Round)
vote = random vote from Prevotes the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
if prs.Step <= RoundStepPrecommit and prs.Round != -1 and prs.Round <= rs.Round then
Precommits = rs.Votes.Precommits(prs.Round)
vote = random vote from Precommits the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
if prs.ProposalPOLRound != -1 then
PolPrevotes = rs.Votes.Prevotes(prs.ProposalPOLRound)
vote = random vote from PolPrevotes the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
1b) if prs.Height != 0 and rs.Height == prs.Height+1 then
vote = random vote from rs.LastCommit peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
1c) if prs.Height != 0 and rs.Height >= prs.Height+2 then
Commit = get commit from BlockStore for prs.Height
vote = random vote from Commit the peer does not have
Send VoteMessage(vote) to the peer
if send returns true, continue
2) Sleep PeerGossipSleepDuration
```
## QueryMaj23Routine
It is used to send the following message: `VoteSetMaj23Message`. `VoteSetMaj23Message` is sent to indicate that the node
has seen +2/3 votes for a given BlockID. This routine is based on the local RoundState (`rs`) and the known PeerRoundState
(`prs`). The routine repeats forever the logic shown below.
```go
1a) if rs.Height == prs.Height then
Prevotes = rs.Votes.Prevotes(prs.Round)
if there is a ⅔ majority for some blockId in Prevotes then
m = VoteSetMaj23Message(prs.Height, prs.Round, Prevote, blockId)
Send m to peer
Sleep PeerQueryMaj23SleepDuration
1b) if rs.Height == prs.Height then
Precommits = rs.Votes.Precommits(prs.Round)
if there is a ⅔ majority for some blockId in Precommits then
m = VoteSetMaj23Message(prs.Height,prs.Round,Precommit,blockId)
Send m to peer
Sleep PeerQueryMaj23SleepDuration
1c) if rs.Height == prs.Height and prs.ProposalPOLRound >= 0 then
Prevotes = rs.Votes.Prevotes(prs.ProposalPOLRound)
if there is a ⅔ majority for some blockId in Prevotes then
m = VoteSetMaj23Message(prs.Height,prs.ProposalPOLRound,Prevotes,blockId)
Send m to peer
Sleep PeerQueryMaj23SleepDuration
1d) if prs.CatchupCommitRound != -1 and 0 < prs.Height and
prs.Height <= blockStore.Height() then
Commit = LoadCommit(prs.Height)
m = VoteSetMaj23Message(prs.Height,Commit.Round,Precommit,Commit.BlockID)
Send m to peer
Sleep PeerQueryMaj23SleepDuration
2) Sleep PeerQueryMaj23SleepDuration
```
## Broadcast routine
The Broadcast routine subscribes to an internal event bus to receive new round steps and votes messages, and broadcasts messages to peers upon receiving those
events.
It broadcasts `NewRoundStepMessage` or `CommitStepMessage` upon new round state event. Note that
broadcasting these messages does not depend on the PeerRoundState; it is sent on the StateChannel.
Upon receiving VoteMessage it broadcasts `HasVoteMessage` message to its peers on the StateChannel.
## Channels
Defines 4 channels: state, data, vote and vote_set_bits. Each channel
has `SendQueueCapacity` and `RecvBufferCapacity` and
`RecvMessageCapacity` set to `maxMsgSize`.
Sending incorrectly encoded data will result in stopping the peer.

+ 0
- 187
spec/reactors/consensus/consensus.md View File

@ -1,187 +0,0 @@
# Tendermint Consensus Reactor
Tendermint Consensus is a distributed protocol executed by validator processes to agree on
the next block to be added to the Tendermint blockchain. The protocol proceeds in rounds, where
each round is a try to reach agreement on the next block. A round starts by having a dedicated
process (called proposer) suggesting to other processes what should be the next block with
the `ProposalMessage`.
The processes respond by voting for a block with `VoteMessage` (there are two kinds of vote
messages, prevote and precommit votes). Note that a proposal message is just a suggestion of what the
next block should be; a validator might vote with a `VoteMessage` for a different block. If, in some
round, enough processes vote for the same block, then this block is committed and later
added to the blockchain. `ProposalMessage` and `VoteMessage` are signed by the private key of the
validator. The internals of the protocol and how it ensures safety and liveness properties are
explained in a forthcoming document.
For efficiency reasons, validators in Tendermint consensus protocol do not agree directly on the
block as the block size is big, i.e., they don't embed the block inside `Proposal` and
`VoteMessage`. Instead, they reach agreement on the `BlockID` (see `BlockID` definition in
[Blockchain](https://github.com/tendermint/spec/blob/master/spec/core/data_structures.md#blockid) section)
that uniquely identifies each block. The block itself is
disseminated to validator processes using a peer-to-peer gossip protocol. It starts by having the
proposer first split a block into a number of block parts, which are then gossiped between
processes using `BlockPartMessage`.
Validators in Tendermint communicate over a peer-to-peer gossip protocol. Each validator is connected
only to a subset of processes called peers. Via the gossip protocol, a validator sends to its peers
all needed information (`ProposalMessage`, `VoteMessage` and `BlockPartMessage`) so they can
reach agreement on some block, and also obtain the content of the chosen block (block parts). As
part of the gossiping protocol, processes also send auxiliary messages that inform peers about the
executed steps of the core consensus algorithm (`NewRoundStepMessage` and `NewValidBlockMessage`), and
also messages that inform peers what votes the process has seen (`HasVoteMessage`,
`VoteSetMaj23Message` and `VoteSetBitsMessage`). These messages are then used in the gossiping
protocol to determine what messages a process should send to its peers.
We now describe the content of each message exchanged during Tendermint consensus protocol.
## ProposalMessage
ProposalMessage is sent when a new block is proposed. It is a suggestion of what the
next block in the blockchain should be.
```go
type ProposalMessage struct {
Proposal Proposal
}
```
### Proposal
Proposal contains height and round for which this proposal is made, BlockID as a unique identifier
of proposed block, timestamp, and POLRound (a so-called Proof-of-Lock (POL) round) that is needed for
termination of the consensus. If POLRound >= 0, then BlockID corresponds to the block that
is locked in POLRound. The message is signed by the validator private key.
```go
type Proposal struct {
Height int64
Round int
POLRound int
BlockID BlockID
Timestamp Time
Signature Signature
}
```
## VoteMessage
VoteMessage is sent to vote for some block (or to inform others that a process does not vote in the
current round). Vote is defined in the
[Blockchain](https://github.com/tendermint/spec/blob/master/spec/core/data_structures.md#blockid)
section and contains the validator's
information (validator address and index), the height and round for which the vote is sent, the vote type,
the blockID if the process votes for some block (`nil` otherwise) and a timestamp when the vote is sent. The
message is signed by the validator private key.
```go
type VoteMessage struct {
Vote Vote
}
```
## BlockPartMessage
BlockPartMessage is sent when gossiping a piece of the proposed block. It contains height, round
and the block part.
```go
type BlockPartMessage struct {
Height int64
Round int
Part Part
}
```
## NewRoundStepMessage
NewRoundStepMessage is sent for every step transition during the core consensus algorithm execution.
It is used in the gossip part of the Tendermint protocol to inform peers about a current
height/round/step a process is in.
```go
type NewRoundStepMessage struct {
Height int64
Round int
Step RoundStepType
SecondsSinceStartTime int
LastCommitRound int
}
```
## NewValidBlockMessage
NewValidBlockMessage is sent when a validator observes a valid block B in some round r,
i.e., there is a Proposal for block B and 2/3+ prevotes for the block B in the round r.
It contains height and round in which valid block is observed, block parts header that describes
the valid block and is used to obtain all
block parts, and a bit array of the block parts a process currently has, so its peers can know what
parts it is missing so they can send them.
In case the block is also committed, then IsCommit flag is set to true.
```go
type NewValidBlockMessage struct {
Height int64
Round int
BlockPartsHeader PartSetHeader
BlockParts BitArray
IsCommit bool
}
```
## ProposalPOLMessage
ProposalPOLMessage is sent when a previous block is re-proposed.
It is used to inform peers in what round the process learned of this block (ProposalPOLRound),
and what prevotes for the re-proposed block the process has.
```go
type ProposalPOLMessage struct {
Height int64
ProposalPOLRound int
ProposalPOL BitArray
}
```
## HasVoteMessage
HasVoteMessage is sent to indicate that a particular vote has been received. It contains height,
round, vote type and the index of the validator that is the originator of the corresponding vote.
```go
type HasVoteMessage struct {
Height int64
Round int
Type byte
Index int
}
```
## VoteSetMaj23Message
VoteSetMaj23Message is sent to indicate that a process has seen +2/3 votes for some BlockID.
It contains height, round, vote type and the BlockID.
```go
type VoteSetMaj23Message struct {
Height int64
Round int
Type byte
BlockID BlockID
}
```
## VoteSetBitsMessage
VoteSetBitsMessage is sent to communicate the bit-array of votes a process has seen for a given
BlockID. It contains height, round, vote type, BlockID and a bit array of
the votes a process has.
```go
type VoteSetBitsMessage struct {
Height int64
Round int
Type byte
BlockID BlockID
Votes BitArray
}
```

+ 0
- 10
spec/reactors/evidence/reactor.md View File

@ -1,10 +0,0 @@
# Evidence Reactor
## Channels
[#1503](https://github.com/tendermint/tendermint/issues/1503)
Sending invalid evidence will result in stopping the peer.
Sending incorrectly encoded data or data exceeding `maxMsgSize` will result
in stopping the peer.

+ 0
- 8
spec/reactors/mempool/concurrency.md View File

@ -1,8 +0,0 @@
# Mempool Concurrency
Look at the concurrency model this uses...
- Receiving CheckTx
- Broadcasting new tx
- Interfaces with consensus engine, reap/update while checking
- Calling the ABCI app (ordering. callbacks. how proxy works alongside the blockchain proxy which actually writes blocks)

+ 0
- 54
spec/reactors/mempool/config.md View File

@ -1,54 +0,0 @@
# Mempool Configuration
Here we describe configuration options around mempool.
For the purposes of this document, they are described
as command-line flags, but they can also be passed in as
environmental variables or in the config.toml file. The
following are all equivalent:
Flag: `--mempool.recheck=false`
Environment: `TM_MEMPOOL_RECHECK=false`
Config:
```toml
[mempool]
recheck = false
```
## Recheck
`--mempool.recheck=false` (default: true)
Recheck determines if the mempool rechecks all pending
transactions after a block was committed. Once a block
is committed, the mempool removes all valid transactions
that were successfully included in the block.
If `recheck` is true, then it will rerun CheckTx on
all remaining transactions with the new block state.
## Broadcast
`--mempool.broadcast=false` (default: true)
Determines whether this node gossips any valid transactions
that arrive in mempool. Default is to gossip anything that
passes CheckTx. If this is disabled, transactions are not
gossiped, but instead stored locally and added to the next
block this node proposes.
## WalDir
`--mempool.wal_dir=/tmp/gaia/mempool.wal` (default: $TM_HOME/data/mempool.wal)
This defines the directory where mempool writes the write-ahead
logs. These files can be used to reload unbroadcasted
transactions if the node crashes.
If the directory passed in is an absolute path, the wal file is
created there. If the directory is a relative path, the path is
appended to the home directory of the tendermint process to
generate an absolute path to the wal directory
(default `$HOME/.tendermint` or set via `TM_HOME` or `--home`)

+ 0
- 43
spec/reactors/mempool/functionality.md View File

@ -1,43 +0,0 @@
# Mempool Functionality
The mempool maintains a list of potentially valid transactions,
both to broadcast to other nodes, as well as to provide to the
consensus reactor when it is selected as the block proposer.
There are two sides to the mempool state:
- External: get, check, and broadcast new transactions
- Internal: return valid transaction, update list after block commit
## External functionality
External functionality is exposed via network interfaces
to potentially untrusted actors.
- CheckTx - triggered via RPC or P2P
- Broadcast - gossip messages after a successful check
## Internal functionality
Internal functionality is exposed via method calls to other
code compiled into the tendermint binary.
- ReapMaxBytesMaxGas - get txs to propose in the next block. Guarantees that the
size of the txs is less than MaxBytes, and gas is less than MaxGas
- Update - remove tx that were included in last block
- ABCI.CheckTx - call ABCI app to validate the tx
What does it provide the consensus reactor?
What guarantees does it need from the ABCI app?
(talk about interleaving processes in concurrency)
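A hedged Go sketch of the internal surface listed above; the names and signatures are illustrative and do not match the tendermint codebase exactly.

```go
package mempool

type Tx []byte

// Mempool sketches the internal functionality described in this section.
type Mempool interface {
	// CheckTx runs the ABCI CheckTx on a transaction received via RPC or P2P
	// and keeps it in the pool if it passes.
	CheckTx(tx Tx) error
	// ReapMaxBytesMaxGas returns transactions for the next proposed block,
	// keeping total size <= maxBytes and total gas <= maxGas.
	ReapMaxBytesMaxGas(maxBytes, maxGas int64) []Tx
	// Update removes transactions that were included in the latest block and,
	// if recheck is enabled, re-runs CheckTx on the remaining transactions.
	Update(height int64, committed []Tx) error
}
```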
## Optimizations
The implementation within this library also implements a tx cache.
This is so that signatures don't have to be reverified if the tx has
already been seen before.
However, we only store valid txs in the cache, not invalid ones.
This is because invalid txs could become good later.
Txs that are included in a block aren't removed from the cache,
as they still may be getting received over the p2p network.
These txs are stored in the cache by their hash, to mitigate memory concerns.

+ 0
- 52
spec/reactors/mempool/messages.md View File

@ -1,52 +0,0 @@
# Mempool Messages
## P2P Messages
There is currently only one message that Mempool broadcasts and receives over
the p2p gossip network (via the reactor): `TxsMessage`
```go
// TxsMessage is a MempoolMessage containing a list of transactions.
type TxsMessage struct {
Txs []types.Tx
}
```
## RPC Messages
Mempool exposes `CheckTx([]byte)` over the RPC interface.
It can be posted via `broadcast_tx_commit`, `broadcast_tx_sync` or
`broadcast_tx_async`. They all take a message with one argument,
`"tx": "HEX_ENCODED_BINARY"`, and differ only in how long they
wait before returning (sync makes sure CheckTx passes, commit
makes sure it was included in a signed block).
Request (`POST http://gaia.zone:26657/`):
```json
{
"id": "",
"jsonrpc": "2.0",
"method": "broadcast_sync",
"params": {
"tx": "F012A4BC68..."
}
}
```
Response:
```json
{
"error": "",
"result": {
"hash": "E39AAB7A537ABAA237831742DCE1117F187C3C52",
"log": "",
"data": "",
"code": 0
},
"id": "",
"jsonrpc": "2.0"
}
```

+ 0
- 27
spec/reactors/mempool/reactor.md View File

@ -1,27 +0,0 @@
# Mempool Reactor
## Channels
See [this issue](https://github.com/tendermint/tendermint/issues/1503)
Mempool maintains a cache of the last 10000 transactions to prevent
replaying old transactions (plus transactions coming from other
validators, who are continually exchanging transactions). Read [Replay
Protection](https://github.com/tendermint/tendermint/blob/8cdaa7f515a9d366bbc9f0aff2a263a1a6392ead/docs/app-dev/app-development.md#replay-protection)
for details.
Sending incorrectly encoded data or data exceeding `maxMsgSize` will result
in stopping the peer.
`maxMsgSize` equals `MaxBatchBytes` (10MB) + 4 (proto overhead).
`MaxBatchBytes` is a mempool config parameter -> defined locally. The reactor
sends transactions to the connected peers in batches. The maximum size of one
batch is `MaxBatchBytes`.
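For illustration, a small Go sketch of packing queued transactions into batches no larger than `MaxBatchBytes`; the byte accounting is simplified and ignores proto encoding overhead, and the names are hypothetical.

```go
package mempool

// maxBatchBytes stands in for the MaxBatchBytes config parameter (10MB).
const maxBatchBytes = 10 * 1024 * 1024

// nextBatch returns the next batch of transactions that fits under
// maxBatchBytes, plus the remaining queue. A single oversized tx is still
// returned alone rather than being dropped here.
func nextBatch(queue [][]byte) (batch, rest [][]byte) {
	size := 0
	for i, tx := range queue {
		if len(batch) > 0 && size+len(tx) > maxBatchBytes {
			return batch, queue[i:]
		}
		batch = append(batch, tx)
		size += len(tx)
	}
	return batch, nil
}
```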
The mempool will not send a tx back to any peer which it received it from.
The reactor assigns an `uint16` number for each peer and maintains a map from
p2p.ID to `uint16`. Each mempool transaction carries a list of all the senders
(`[]uint16`). The list is updated every time mempool receives a transaction it
is already seen. `uint16` assumes that a node will never have over 65535 active
peers (0 is reserved for unknown source - e.g. RPC).

+ 0
- 164
spec/reactors/pex/pex.md View File

@ -1,164 +0,0 @@
# Peer Strategy and Exchange
Here we outline the design of the AddressBook
and how it is used by the Peer Exchange Reactor (PEX) to ensure we are connected
to good peers and to gossip peers to others.
## Peer Types
Certain peers are special in that they are specified by the user as `persistent`,
which means we auto-redial them if the connection fails, or if we fail to dial
them.
Some peers can be marked as `private`, which means
we will not put them in the address book or gossip them to others.
All peers except private peers and peers coming from them are tracked using the
address book.
The rest of our peers are only distinguished by being either
inbound (they dialed our public address) or outbound (we dialed them).
## Discovery
Peer discovery begins with a list of seeds.
When we don't have enough peers, we
1. ask existing peers
2. dial seeds if we're not dialing anyone currently
On startup, we will also immediately dial the given list of `persistent_peers`,
and will attempt to maintain persistent connections with them. If the
connections die, or we fail to dial, we will redial every 5s for a few minutes,
then switch to an exponential backoff schedule, and after about a day of
trying, stop dialing the peer. This is the behavior when `persistent_peers_max_dial_period` is configured to zero.
If `persistent_peers_max_dial_period` is set greater than zero, the interval between dials to each persistent peer
will not exceed `persistent_peers_max_dial_period` during exponential backoff.
Therefore, `dial_period` = min(`persistent_peers_max_dial_period`, `exponential_backoff_dial_period`),
and we keep trying to dial regardless of `maxAttemptsToDial`.
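For illustration, a small Go sketch of the resulting dial interval; the function and parameter names are hypothetical, and jitter and give-up logic are omitted.

```go
package pex

import "time"

// nextDialPeriod caps exponential backoff at maxDialPeriod when the latter is
// non-zero, i.e. dial_period = min(persistent_peers_max_dial_period,
// exponential_backoff_dial_period).
func nextDialPeriod(attempt uint, base, maxDialPeriod time.Duration) time.Duration {
	backoff := base << attempt // exponential backoff; overflow is ignored in this sketch
	if maxDialPeriod > 0 && backoff > maxDialPeriod {
		return maxDialPeriod
	}
	return backoff
}
```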
As long as we have fewer than `MaxNumOutboundPeers`, we periodically request
additional peers from each of our own peers and try the seeds.
## Listening
Peers listen on a configurable ListenAddr that they self-report in their
NodeInfo during handshakes with other peers. Peers accept up to
`MaxNumInboundPeers` incoming peers.
## Address Book
Peers are tracked via their ID (their PubKey.Address()).
Peers are added to the address book from the PEX when they first connect to us or
when we hear about them from other peers.
The address book is arranged in sets of buckets, and distinguishes between
vetted (old) and unvetted (new) peers. It keeps different sets of buckets for
vetted and unvetted peers. Buckets provide randomization over peer selection.
Peers are put in buckets according to their IP groups.
IP group can be a masked IP (e.g. `1.2.0.0` or `2602:100::`) or `local` for
local addresses or `unroutable` for unroutable addresses. The mask which
corresponds to the `/16` subnet is used for IPv4, and the `/32` subnet for IPv6.
Each group has a limited number of buckets to prevent DoS attacks coming from
that group (e.g. an attacker buying a `/16` block of IPs and launching a DoS
attack).
[highwayhash](https://arxiv.org/abs/1612.06257) is used as a hashing function
when calculating a bucket.
When placing a peer into a new bucket:
```md
hash(key + sourcegroup + int64(hash(key + group + sourcegroup)) % buckets_per_group) % num_new_buckets
```
When placing a peer into an old bucket:
```md
hash(key + group + int64(hash(key + addr)) % buckets_per_group) % num_old_buckets
```
where `key` is a random 24-character HEX string, `group` is the IP group of the peer (e.g. `1.2.0.0`),
`sourcegroup` is the IP group of the sender (the peer who sent us this address) (e.g. `174.11.0.0`),
and `addr` is the string representation of the peer's address (e.g. `174.11.10.2:26656`).
A vetted peer can only be in one bucket. An unvetted peer can be in multiple buckets, and
each instance of the peer can have a different IP:PORT.
If we're trying to add a new peer but there's no space in its bucket, we'll
remove the worst peer from that bucket to make room.
## Vetting
When a peer is first added, it is unvetted.
Marking a peer as vetted is outside the scope of the `p2p` package.
For Tendermint, a Peer becomes vetted once it has contributed sufficiently
at the consensus layer; ie. once it has sent us valid and not-yet-known
votes and/or block parts for `NumBlocksForVetted` blocks.
Other users of the p2p package can determine their own conditions for when a peer is marked vetted.
If a peer becomes vetted but there are already too many vetted peers,
a randomly selected one of the vetted peers becomes unvetted.
If a peer becomes unvetted (either a new peer, or one that was previously vetted),
a randomly selected one of the unvetted peers is removed from the address book.
More fine-grained tracking of peer behaviour can be done using
a trust metric (see below), but it's best to start with something simple.
## Select Peers to Dial
When we need more peers, we pick addresses randomly from the addrbook with some
configurable bias for unvetted peers. The bias should be lower when we have
fewer peers and can increase as we obtain more, ensuring that our first peers
are more trustworthy, but always giving us the chance to discover new good
peers.
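One possible shape for that bias, purely as a sketch (the constants and the use of the current outbound-peer count are assumptions):
```go
package pex

// unvettedBiasPercent returns the percentage of address picks that should go
// to unvetted ("new") addresses. It grows as we accumulate outbound peers, so
// our earliest connections favor vetted addresses. Constants are illustrative.
func unvettedBiasPercent(numOutboundPeers int) int {
	bias := 10 + 10*numOutboundPeers
	if bias > 90 {
		bias = 90 // always leave some chance of picking a vetted peer
	}
	return bias
}
```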
We track the last time we dialed a peer and the number of unsuccessful attempts
we've made. If too many attempts are made, we mark the peer as bad.
Connection attempts are made with exponential backoff (plus jitter). Because
the selection process happens every `ensurePeersPeriod`, we might not end up
dialing a peer for much longer than the backoff duration.
If we fail to connect to a peer after 16 tries (with exponential backoff), we
remove it from the address book completely. For persistent peers, however, we keep trying to
dial indefinitely, unless `persistent_peers_max_dial_period` is configured to zero.
## Select Peers to Exchange
When we’re asked for peers, we select them as follows:
- select at most `maxGetSelection` peers
- try to select at least `minGetSelection` peers - if we have fewer than that, select them all.
- select a random, unbiased `getSelectionPercent` of the peers
Send the selected peers. Note we select peers for sending without bias for vetted/unvetted.
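A sketch of that selection (the constant values and function name are assumptions):
```go
package pex

import "math/rand"

// selectPeersToSend picks an unbiased random getSelectionPercent of the known
// addresses, clamped to [minGetSelection, maxGetSelection]. Values illustrative.
func selectPeersToSend(addrs []string) []string {
	const (
		getSelectionPercent = 23
		minGetSelection     = 32
		maxGetSelection     = 250
	)

	n := len(addrs) * getSelectionPercent / 100
	if n < minGetSelection {
		n = minGetSelection
	}
	if n > maxGetSelection {
		n = maxGetSelection
	}
	if n > len(addrs) {
		n = len(addrs) // fewer addresses than the minimum: send them all
	}

	shuffled := append([]string(nil), addrs...)
	rand.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	return shuffled[:n]
}
```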
## Preventing Spam
There are various cases where we decide a peer has misbehaved and we disconnect from them.
When this happens, the peer is removed from the address book and blacklisted for
some amount of time. We call this "Disconnect and Mark".
Note that the bad behaviour may be detected outside the PEX reactor itself
(for instance, in the mconnection, or another reactor), but it must be communicated to the PEX reactor
so it can remove and mark the peer.
In the PEX, if a peer sends us an unsolicited list of peers,
or if the peer sends a request too soon after another one,
we Disconnect and MarkBad.
## Trust Metric
The quality of peers can be tracked in more fine-grained detail using a
Proportional-Integral-Derivative (PID) controller that incorporates
current, past, and rate-of-change data to inform peer quality.
While a PID trust metric has been implemented, it remains for future work
to use it in the PEX.
See the [trustmetric](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-006-trust-metric.md)
and [trustmetric usage](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-007-trust-metric-usage.md)
architecture docs for more details.

+ 0 - 12 spec/reactors/pex/reactor.md
# PEX Reactor
## Channels
Defines only `SendQueueCapacity`. [#1503](https://github.com/tendermint/tendermint/issues/1503)
Implements rate-limiting by enforcing a minimal time between two consecutive
`pexRequestMessage` requests. If the peer sends us addresses we did not ask for,
it is stopped.
Sending incorrectly encoded data or data exceeding `maxMsgSize` will result
in stopping the peer.
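A minimal sketch of such rate-limiting, with the interval, type, and field names as assumptions:
```go
package pex

import (
	"fmt"
	"time"
)

// requestTracker enforces a minimal interval between consecutive
// pexRequestMessage requests from the same peer. The 30s value is illustrative.
type requestTracker struct {
	lastRequest map[string]time.Time // peer ID -> time of last request
}

func newRequestTracker() *requestTracker {
	return &requestTracker{lastRequest: make(map[string]time.Time)}
}

func (t *requestTracker) receiveRequest(peerID string) error {
	const minReceiveRequestInterval = 30 * time.Second

	if last, ok := t.lastRequest[peerID]; ok && time.Since(last) < minReceiveRequestInterval {
		return fmt.Errorf("peer %s sent a pexRequestMessage too soon; stopping peer", peerID)
	}
	t.lastRequest[peerID] = time.Now()
	return nil
}
```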

+ 0 - 5 spec/reactors/readme.md
---
cards: true
---
# Reactors

+ 0 - 77 spec/reactors/state_sync/reactor.md
# State Sync Reactor
State sync allows new nodes to rapidly bootstrap and join the network by discovering, fetching,
and restoring state machine snapshots. For more information, see the [state sync ABCI section](../../abci/apps.md#state-sync).
The state sync reactor has two main responsibilities:
* Serving state machine snapshots taken by the local ABCI application to new nodes joining the
network.
* Discovering existing snapshots and fetching snapshot chunks for an empty local application
being bootstrapped.
The state sync process for bootstrapping a new node is described in detail in the section linked
above. While technically part of the reactor (see `statesync/syncer.go` and related components),
this document will only cover the P2P reactor component.
For details on the ABCI methods and data types, see the [ABCI documentation](../../abci/abci.md).
## State Sync P2P Protocol
When a new node begins state syncing, it will ask all peers it encounters if they have any
available snapshots:
```go
type snapshotsRequestMessage struct{}
```
The receiver will query the local ABCI application via `ListSnapshots`, and send a message
containing snapshot metadata (limited to 4 MB) for each of the 10 most recent snapshots:
```go
type snapshotsResponseMessage struct {
	Height   uint64
	Format   uint32
	Chunks   uint32
	Hash     []byte
	Metadata []byte
}
```
The node running state sync will offer these snapshots to the local ABCI application via
`OfferSnapshot` ABCI calls, and keep track of which peers contain which snapshots. Once a snapshot
is accepted, the state syncer will request snapshot chunks from appropriate peers:
```go
type chunkRequestMessage struct {
	Height uint64
	Format uint32
	Index  uint32
}
```
The receiver will load the requested chunk from its local application via `LoadSnapshotChunk`,
and respond with it (limited to 16 MB):
```go
type chunkResponseMessage struct {
	Height  uint64
	Format  uint32
	Index   uint32
	Chunk   []byte
	Missing bool
}
```
Here, `Missing` is used to signify that the chunk was not found on the peer, since an empty
chunk is a valid (although unlikely) response.
The returned chunk is given to the ABCI application via `ApplySnapshotChunk` until the snapshot
is restored. If a chunk response is not returned within some time, it will be re-requested,
possibly from a different peer.
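The retry behavior could be sketched as follows; the timeout value, function shape, and peer rotation are assumptions, not the actual syncer code:
```go
package statesync

import "time"

// fetchChunk requests a chunk and waits for its chunkResponseMessage. If no
// response arrives within chunkRequestTimeout, it re-requests the chunk from
// the next peer known to hold the snapshot. Sketch only.
func fetchChunk(request func(peer string), peers []string, responses <-chan []byte) []byte {
	const chunkRequestTimeout = 10 * time.Second

	for i := 0; ; i++ {
		request(peers[i%len(peers)]) // rotate through peers advertising the snapshot
		select {
		case chunk := <-responses:
			return chunk
		case <-time.After(chunkRequestTimeout):
			// timed out: fall through and re-request, possibly from a different peer
		}
	}
}
```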
The ABCI application is able to request peer bans and chunk refetching as part of the ABCI protocol.
If no state sync is in progress (i.e. during normal operation), any unsolicited response messages
are discarded.
