
Fastsync spec from tendermint-rs (#157)

* fastsync spec from tendermint-rs

* fixed broken link

* fixed linting

* more fixes

* markdown lint

* move fast_sync to rust-spec

Co-authored-by: Marko Baricevic <marbar3778@yahoo.com>
Josef Widder 4 years ago
committed by GitHub
commit 606abc7fc0
24 changed files with 3948 additions and 0 deletions
  1. rust-spec/fastsync/README.md (+50, -0)
  2. rust-spec/fastsync/Tinychain.tla (+113, -0)
  3. rust-spec/fastsync/fastsync.md (+1216, -0)
  4. rust-spec/fastsync/fastsync.tla (+825, -0)
  5. rust-spec/fastsync/scheduler.tla (+606, -0)
  6. rust-spec/fastsync/verification/001bmc-apalache.csv (+13, -0)
  7. rust-spec/fastsync/verification/002tlc-tlc.csv (+9, -0)
  8. rust-spec/fastsync/verification/MC-CorrectBlocksInv.cfg (+10, -0)
  9. rust-spec/fastsync/verification/MC-CorrectNeverSuspectedInv.cfg (+10, -0)
  10. rust-spec/fastsync/verification/MC-Sync1AsInv.cfg (+10, -0)
  11. rust-spec/fastsync/verification/MC-Sync2AsInv.cfg (+10, -0)
  12. rust-spec/fastsync/verification/MC-Sync3AsInv.cfg (+10, -0)
  13. rust-spec/fastsync/verification/MC-SyncFromCorrectInv.cfg (+10, -0)
  14. rust-spec/fastsync/verification/MC-Termination.cfg (+10, -0)
  15. rust-spec/fastsync/verification/MC-TerminationByTO.cfg (+10, -0)
  16. rust-spec/fastsync/verification/MC-TerminationCorr.cfg (+10, -0)
  17. rust-spec/fastsync/verification/MC_1_0_4.tla (+17, -0)
  18. rust-spec/fastsync/verification/MC_1_1_4.tla (+15, -0)
  19. rust-spec/fastsync/verification/MC_1_2_4.tla (+15, -0)
  20. rust-spec/fastsync/verification/MC_1_2_5.tla (+15, -0)
  21. rust-spec/fastsync/verification/MC_1_3_5.tla (+15, -0)
  22. rust-spec/fastsync/verification/MC_2_0_4.tla (+17, -0)
  23. rust-spec/fastsync/verification/Tinychain.tla (+110, -0)
  24. rust-spec/fastsync/verification/fastsync_apalache.tla (+822, -0)

rust-spec/fastsync/README.md (+50, -0)

@@ -0,0 +1,50 @@
# Fast Sync Subprotocol Specification

This directory contains English and TLA+ specifications for the FastSync
protocol as it is currently implemented in the Tendermint Core codebase.

## English Specification

The [English Specification](fastsync.md) provides a detailed description of the
fast sync problem and the properties a correct protocol must satisfy. It also
includes a detailed description of the protocol as currently implemented in Go,
and an analysis of the implementation with respect to those properties.

It was found that the current implementation does not satisfy certain
properties and is therefore not a correct solution to the fast sync problem.
The issue discovered holds for all previous implementations of the protocol. A
fix is described which is straightforward to implement.
## TLA+ Specification

Two TLA+ specifications are provided: a high-level [specification
of the protocol](fastsync.tla) and a low-level [specification of the scheduler
component of the implementation](scheduler.tla). Both specifications contain
properties that may be checked by the TLC model checker, though only for small
values of the relevant parameters.

We will continue to refine these specifications in our research work: removing
the redundancies between them, improving their utility to researchers and
engineers, and improving their verifiability. For now, they provide a complete
description of the fast sync protocol in TLA+; especially
[scheduler.tla](scheduler.tla), which maps very closely to the current
implementation of the [scheduler in Go](https://github.com/tendermint/tendermint/blob/master/blockchain/v2/scheduler.go).
The [scheduler.tla](scheduler.tla) can be model checked in TLC with the following
parameters (a corresponding TLC configuration sketch is given after this list):

- Constants:
  - numRequests <- 2
  - PeerIDs <- 0..2
  - ultimateHeight <- 3
- Invariants:
  - TypeOK
- Properties:
  - TerminationWhenNoAdvance
  - TerminationGoodPeers
  - TerminationAllCases
- Proofs that properties are not vacuously true:
  - TerminationGoodPeersPre
  - TerminationAllCasesPre
  - SchedulerIncreasePre
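
As an illustration of how these parameters fit together, below is a minimal TLC
configuration sketch for scheduler.tla. It is an editorial example rather than
one of the configuration files added in this commit; in particular, the file
name and the enumerated form of PeerIDs (a set literal instead of 0..2) are
assumptions.

```
\* scheduler.cfg (illustrative sketch only, not part of this commit)
SPECIFICATION Spec

CONSTANTS
    numRequests = 2
    PeerIDs = {0, 1, 2}
    ultimateHeight = 3

INVARIANT
    TypeOK

PROPERTIES
    TerminationWhenNoAdvance
    TerminationGoodPeers
    TerminationAllCases
    TerminationGoodPeersPre
    TerminationAllCasesPre
    SchedulerIncreasePre
```

The three Pre properties are expected to be violated; the counterexamples TLC
reports for them demonstrate that the preconditions of the corresponding
termination properties are not vacuously false.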

rust-spec/fastsync/Tinychain.tla (+113, -0)

@@ -0,0 +1,113 @@
-------------------------- MODULE Tinychain ----------------------------------
(* A very abstract model of Tendermint blockchain. Its only purpose is to highlight
the relation between validator sets, next validator sets, and last commits.
*)
EXTENDS Integers
\* type annotation
a <: b == a
\* the type of validator sets, e.g., STRING
VST == STRING
\* LastCommit type.
\* It contains:
\* 1) a flag indicating whether the block id equals the hash of the previous block, and
\* 2) the set of the validators who have committed the block.
\* In the implementation, blockId is the hash of the previous block, which cannot be forged.
\* We abstract the block id into whether it equals the hash of the previous block or not.
LCT == [blockIdEqRef |-> BOOLEAN, committers |-> VST]
\* Block type.
\* A block contains its height, validator set, next validator set, and last commit.
\* Moreover, it contains the flag that tells us whether the block is equal to the one
\* on the reference chain (this is an abstraction of hash).
BT == [height |-> Int, hashEqRef |-> BOOLEAN, wellFormed |-> BOOLEAN,
VS |-> VST, NextVS |-> VST, lastCommit |-> LCT]
CONSTANTS
(*
A set of abstract values, each value representing a set of validators.
For the purposes of this specification, they can be any values,
e.g., "s1", "s2", etc.
*)
VALIDATOR_SETS,
(* a nil validator set that is outside of VALIDATOR_SETS *)
NIL_VS,
(* The maximal height, up to which the blockchain may grow *)
MAX_HEIGHT
Heights == 1..MAX_HEIGHT
\* the set of all potential commits
Commits == [blockIdEqRef: BOOLEAN, committers: VALIDATOR_SETS]
\* the set of all potential blocks, not necessarily coming from the blockchain
Blocks ==
[height: Heights, hashEqRef: BOOLEAN, wellFormed: BOOLEAN,
VS: VALIDATOR_SETS, NextVS: VALIDATOR_SETS, lastCommit: Commits]
\* Does the chain contain a sound sequence of blocks that could be produced by
\* a 2/3 of faulty validators. This operator can be used to initialise the chain!
\* Since we are abstracting validator sets with VALIDATOR_SETS, which are
\* 2/3 quorums, we just compare committers to those sets. In a more detailed
\* specification, one would write the \subseteq operator instead of equality.
IsCorrectChain(chain) ==
\* restrict the structure of the blocks, to decrease the TLC search space
LET OkCommits == [blockIdEqRef: {TRUE}, committers: VALIDATOR_SETS]
OkBlocks == [height: Heights, hashEqRef: {TRUE}, wellFormed: {TRUE},
VS: VALIDATOR_SETS, NextVS: VALIDATOR_SETS, lastCommit: OkCommits]
IN
/\ chain \in [1..MAX_HEIGHT -> OkBlocks]
/\ \A h \in 1..MAX_HEIGHT:
LET b == chain[h] IN
/\ b.height = h \* the height is correct
/\ h > 1 =>
LET p == chain[h - 1] IN
/\ b.VS = p.NextVS \* the validators propagate from the previous block
/\ b.lastCommit.committers = p.VS \* and they are the committers
\* The basic properties of blocks on the blockchain:
\* They should pass the validity check and they may verify the next block.
\* Does the block pass the consistency check against the next validators of the previous block
IsMatchingValidators(block, nextVS) ==
\* simply check that the validator set is propagated correctly.
\* (the implementation tests hashes and the application state)
block.VS = nextVS
\* Does the block verify the commit (of the next block)
PossibleCommit(block, commit) ==
\* the commits are signed by the block validators
/\ commit.committers = block.VS
\* The block id in the commit matches the block hash (abstract comparison).
\* (The implementation has extensive tests for that.)
\* this is an abstraction of: commit.blockId = hash(block)
\*
\* These are possible scenarios on the concrete hashes:
\*
\* scenario 1: commit.blockId = 10 /\ hash(block) = 10 /\ hash(ref) = 10
\* scenario 2: commit.blockId = 20 /\ hash(block) = 20 /\ block.VS /= ref.VS
\* scenario 3: commit.blockId = 50 /\ hash(block) = 100
\* scenario 4: commit.blockId = 10 /\ hash(block) = 100
\* scenario 5: commit.blockId = 100 /\ hash(block) = 10
/\ commit.blockIdEqRef = block.hashEqRef
\* the following test would be cheating, as we do not have access to the
\* reference chain:
\* /\ commit.blockIdEqRef
\* Basic invariants
\* every block has the validator set that is chosen by its predecessor
ValidBlockInv(chain) ==
\A h \in 2..MAX_HEIGHT:
IsMatchingValidators(chain[h], chain[h - 1].NextVS)
\* last commit of every block is signed by the validators of the predecessor
VerifiedBlockInv(chain) ==
\A h \in 2..MAX_HEIGHT:
PossibleCommit(chain[h - 1], chain[h].lastCommit)
==================================================================================
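For intuition, the following editorial sketch (not part of Tinychain.tla) shows a chain of height 2 that satisfies IsCorrectChain, assuming MAX_HEIGHT = 2 and two abstract validator sets "s1" and "s2"; the name ExampleChain is purely illustrative.
\* An illustrative witness for IsCorrectChain (assumes MAX_HEIGHT = 2, VALIDATOR_SETS = {"s1", "s2"}).
\* Block 2 carries the validator set announced by block 1 (VS = NextVS of block 1),
\* and its last commit is signed by the validators of block 1.
\* (A sequence of length 2 is a function on 1..2, as IsCorrectChain requires.)
ExampleChain ==
    << [height |-> 1, hashEqRef |-> TRUE, wellFormed |-> TRUE,
        VS |-> "s1", NextVS |-> "s2",
        lastCommit |-> [blockIdEqRef |-> TRUE, committers |-> "s1"]],
       [height |-> 2, hashEqRef |-> TRUE, wellFormed |-> TRUE,
        VS |-> "s2", NextVS |-> "s1",
        lastCommit |-> [blockIdEqRef |-> TRUE, committers |-> "s1"]] >>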

rust-spec/fastsync/fastsync.md (+1216, -0)
(file diff suppressed because it is too large)


rust-spec/fastsync/fastsync.tla (+825, -0)

@@ -0,0 +1,825 @@
----------------------------- MODULE fastsync -----------------------------
(*
In this document we give the high level specification of the fast sync
protocol as implemented here:
https://github.com/tendermint/tendermint/tree/master/blockchain/v2.
We assume a system in which one node is trying to sync with the blockchain
(replicated state machine) by downloading blocks from the set of full nodes
(we call them peers) that are block providers, and executing transactions
(part of the block) against the application.
Peers can be faulty, and we don't make any assumption about the ratio of correct to faulty
nodes in the node's peer set (i.e., they can all be faulty). Correct peers are part
of the replicated state machine, i.e., they manage the blockchain and execute
transactions against the same deterministic application. We don't make any
assumptions about the behavior of faulty processes. Processes (client and peers)
communicate by message passing.
In this specification, we model this system with two parties:
- the node (state machine) that is doing fastsync and
- the environment with which node interacts.
The environment consists of the set of (correct and faulty) peers with
which node interacts as part of fast sync protocol, but also contains some
aspects (adding/removing peers, timeout mechanism) that are part of the node
local environment (could be seen as part of the runtime in which node
executes).
As part of the fast sync protocol a node and the peers exchange the following messages:
- StatusRequest
- StatusResponse
- BlockRequest
- BlockResponse
- NoBlockResponse.
A node periodically issues StatusRequests to query peers for their current height (to decide which
blocks to request from which peers). Based on the StatusResponses sent by peers, the node requests
blocks for some height(s) by sending BlockRequest messages to peers. A peer provides a requested block in a
BlockResponse message. If a peer does not want to provide a requested block, it sends a NoBlockResponse message.
In addition to those messages, a node in this spec receives additional input messages (events):
- AddPeer
- RemovePeer
- SyncTimeout.
These are the control messages that are provided to the node by its execution environment. AddPeer
is for the case when a connection is established with a peer; similarly, RemovePeer is for the case
when a connection with a peer is terminated. Finally, SyncTimeout is used to model a timeout trigger.
We assume that the fast sync protocol starts when connections with some number of peers
are established. Therefore, the peer set is initialised with a non-empty set of peer ids. Note,
however, that the node does not initially know the peer heights.
*)
EXTENDS Integers, FiniteSets, Sequences
CONSTANTS MAX_HEIGHT, \* the maximal height of blockchain
VALIDATOR_SETS, \* abstract set of validators
NIL_VS, \* a nil validator set
CORRECT, \* set of correct peers
FAULTY, \* set of faulty peers
TARGET_PENDING, \* maximum number of pending requests + downloaded blocks that are not yet processed
PEER_MAX_REQUESTS \* maximum number of pending requests per peer
ASSUME CORRECT \intersect FAULTY = {}
ASSUME TARGET_PENDING > 0
ASSUME PEER_MAX_REQUESTS > 0
\* the blockchain, see Tinychain
VARIABLE chain
\* introduce tiny chain as the source of blocks for the correct nodes
INSTANCE Tinychain
\* a special value for an undefined height
NilHeight == 0
\* the height of the genesis block
TrustedHeight == 1
\* the set of all peer ids the node can receive a message from
AllPeerIds == CORRECT \union FAULTY
\* A correct last commit has enough voting power, i.e., +2/3 of the voting power of
\* the corresponding validator set (enoughVotingPower = TRUE) that signs blockId.
\* BlockId defines correct previous block (in the implementation it is the hash of the block).
\* Instead of blockId, we encode blockIdEqRef, which is true, if the block id is equal
\* to the hash of the previous block, see Tinychain.
CorrectLastCommit(h) == chain[h].lastCommit
NilCommit == [blockIdEqRef |-> FALSE, committers |-> NIL_VS]
\* correct node always supplies the blocks from the blockchain
CorrectBlock(h) == chain[h]
NilBlock ==
[height |-> 0, hashEqRef |-> FALSE, wellFormed |-> FALSE,
lastCommit |-> NilCommit, VS |-> NIL_VS, NextVS |-> NIL_VS]
\* a special value for an undefined peer
NilPeer == "Nil" \* STRING for apalache efficiency
\* control the state of the syncing node
States == { "running", "finished"}
NoMsg == [type |-> "None"]
\* the variables of the node running fastsync
VARIABLES
state, \* running or finished
(*
blockPool [
height, \* current height we are trying to sync. Last block executed is height - 1
peerIds, \* set of peers node is connected to
peerHeights, \* map of peer ids to their (stated) heights
blockStore, \* map of heights to (received) blocks
receivedBlocks, \* map of heights to peer that has sent us the block (stored in blockStore)
pendingBlocks, \* map of heights to peer to which block request has been sent
syncHeight, \* height at the point syncTimeout was triggered last time
syncedBlocks \* number of blocks synced since last syncTimeout. If it is 0 when the next timeout occurs, then protocol terminates.
]
*)
blockPool
\* the variables of the peers providing blocks
VARIABLES
(*
peersState [
peerHeights, \* track peer heights
statusRequested, \* boolean set to true when StatusRequest is received. Models periodic sending of StatusRequests.
blocksRequested \* set of BlockRequests received that are not answered yet
]
*)
peersState
\* the variables for the network and scheduler
VARIABLES
turn, \* who is taking the turn: "Peers" or "Node"
inMsg, \* a node receives message by this variable
outMsg \* a node sends a message by this variable
(* the variables of the node *)
nvars == <<state, blockPool>>
(*************** Type definitions for Apalache (model checker) **********************)
AsIntSet(S) == S <: {Int}
\* type of process ids
PIDT == STRING
AsPidSet(S) == S <: {PIDT}
\* ControlMessage type
CMT == [type |-> STRING, peerId |-> PIDT] \* type of control messages
\* InMsg type
IMT == [type |-> STRING, peerId |-> PIDT, height |-> Int, block |-> BT]
AsInMsg(m) == m <: IMT
AsInMsgSet(S) == S <: {IMT}
\* OutMsg type
OMT == [type |-> STRING, peerId |-> PIDT, height |-> Int]
AsOutMsg(m) == m <: OMT
AsOutMsgSet(S) == S <: {OMT}
\* block pool type
BPT == [height |-> Int, peerIds |-> {PIDT}, peerHeights |-> [PIDT -> Int],
blockStore |-> [Int -> BT], receivedBlocks |-> [Int -> PIDT],
pendingBlocks |-> [Int -> PIDT], syncedBlocks |-> Int, syncHeight |-> Int]
AsBlockPool(bp) == bp <: BPT
(******************** Sets of messages ********************************)
\* Control messages
ControlMsgs ==
AsInMsgSet([type: {"addPeer"}, peerId: AllPeerIds])
\union
AsInMsgSet([type: {"removePeer"}, peerId: AllPeerIds])
\union
AsInMsgSet([type: {"syncTimeout"}])
\* All messages (and events) received by a node
InMsgs ==
AsInMsgSet({NoMsg})
\union
AsInMsgSet([type: {"blockResponse"}, peerId: AllPeerIds, block: Blocks])
\union
AsInMsgSet([type: {"noBlockResponse"}, peerId: AllPeerIds, height: Heights])
\union
AsInMsgSet([type: {"statusResponse"}, peerId: AllPeerIds, height: Heights])
\union
ControlMsgs
\* Messages sent by a node and received by peers (environment in our case)
OutMsgs ==
AsOutMsgSet({NoMsg})
\union
AsOutMsgSet([type: {"statusRequest"}]) \* StatusRequest is broadcast to the set of connected peers.
\union
AsOutMsgSet([type: {"blockRequest"}, peerId: AllPeerIds, height: Heights])
(********************************** NODE ***********************************)
InitNode ==
\E pIds \in SUBSET AllPeerIds: \* set of peers node established initial connections with
/\ pIds \subseteq CORRECT \* this line is not necessary
/\ pIds /= AsPidSet({}) \* apalache better checks non-emptiness than subtracts from SUBSET
/\ blockPool = AsBlockPool([
height |-> TrustedHeight + 1, \* the genesis block is at height 1
syncHeight |-> TrustedHeight + 1, \* and we are synchronized to it
peerIds |-> pIds,
peerHeights |-> [p \in AllPeerIds |-> NilHeight], \* no peer height is known
blockStore |->
[h \in Heights |->
IF h > TrustedHeight THEN NilBlock ELSE chain[1]],
receivedBlocks |-> [h \in Heights |-> NilPeer],
pendingBlocks |-> [h \in Heights |-> NilPeer],
syncedBlocks |-> -1
])
/\ state = "running"
\* Remove faulty peers.
\* Returns new block pool.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L222
RemovePeers(rmPeers, bPool) ==
LET keepPeers == bPool.peerIds \ rmPeers IN
LET pHeights ==
[p \in AllPeerIds |-> IF p \in rmPeers THEN NilHeight ELSE bPool.peerHeights[p]] IN
LET failedRequests ==
{h \in Heights: /\ h >= bPool.height
/\ \/ bPool.pendingBlocks[h] \in rmPeers
\/ bPool.receivedBlocks[h] \in rmPeers} IN
LET pBlocks ==
[h \in Heights |-> IF h \in failedRequests THEN NilPeer ELSE bPool.pendingBlocks[h]] IN
LET rBlocks ==
[h \in Heights |-> IF h \in failedRequests THEN NilPeer ELSE bPool.receivedBlocks[h]] IN
LET bStore ==
[h \in Heights |-> IF h \in failedRequests THEN NilBlock ELSE bPool.blockStore[h]] IN
IF keepPeers /= bPool.peerIds
THEN [bPool EXCEPT
!.peerIds = keepPeers,
!.peerHeights = pHeights,
!.pendingBlocks = pBlocks,
!.receivedBlocks = rBlocks,
!.blockStore = bStore
]
ELSE bPool
\* Add a peer.
\* see https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L198
AddPeer(peer, bPool) ==
[bPool EXCEPT !.peerIds = bPool.peerIds \union {peer}]
(*
Handle StatusResponse message.
If valid status response, update peerHeights.
If invalid (height is smaller than the current), then remove peer.
Returns new block pool.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L667
*)
HandleStatusResponse(msg, bPool) ==
LET peerHeight == bPool.peerHeights[msg.peerId] IN
IF /\ msg.peerId \in bPool.peerIds
/\ msg.height >= peerHeight
THEN \* a correct response
LET pHeights == [bPool.peerHeights EXCEPT ![msg.peerId] = msg.height] IN
[bPool EXCEPT !.peerHeights = pHeights]
ELSE RemovePeers({msg.peerId}, bPool) \* the peer has sent us a message with a smaller height, or the peer is not in our peer list
(*
Handle BlockResponse message.
If valid block response, update blockStore, pendingBlocks and receivedBlocks.
If invalid (unsolicited response or malformed block), then remove peer.
Returns new block pool.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L522
*)
HandleBlockResponse(msg, bPool) ==
LET h == msg.block.height IN
IF /\ msg.peerId \in bPool.peerIds
/\ bPool.blockStore[h] = NilBlock
/\ bPool.pendingBlocks[h] = msg.peerId
/\ msg.block.wellFormed
THEN
[bPool EXCEPT
!.blockStore = [bPool.blockStore EXCEPT ![h] = msg.block],
!.receivedBlocks = [bPool.receivedBlocks EXCEPT![h] = msg.peerId],
!.pendingBlocks = [bPool.pendingBlocks EXCEPT![h] = NilPeer]
]
ELSE RemovePeers({msg.peerId}, bPool)
HandleNoBlockResponse(msg, bPool) ==
RemovePeers({msg.peerId}, bPool)
\* Compute max peer height.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L440
MaxPeerHeight(bPool) ==
IF bPool.peerIds = AsPidSet({})
THEN 0 \* no peers, just return 0
ELSE LET Hts == {bPool.peerHeights[p] : p \in bPool.peerIds} IN
CHOOSE max \in Hts: \A h \in Hts: h <= max
(* Returns the next height for which a request should be sent.
Returns NilHeight if there is no height for which a request can be sent.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L454 *)
FindNextRequestHeight(bPool) ==
LET S == {i \in Heights:
/\ i >= bPool.height
/\ i <= MaxPeerHeight(bPool)
/\ bPool.blockStore[i] = NilBlock
/\ bPool.pendingBlocks[i] = NilPeer} IN
IF S = AsIntSet({})
THEN NilHeight
ELSE
CHOOSE min \in S: \A h \in S: h >= min
\* Returns number of pending requests for a given peer.
NumOfPendingRequests(bPool, peer) ==
LET peerPendingRequests ==
{h \in Heights:
/\ h >= bPool.height
/\ bPool.pendingBlocks[h] = peer
}
IN
Cardinality(peerPendingRequests)
(* Returns a peer that can serve the block for a given height.
Returns NilPeer if there is no such peer.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L477 *)
FindPeerToServe(bPool, h) ==
LET peersThatCanServe == { p \in bPool.peerIds:
/\ bPool.peerHeights[p] >= h
/\ NumOfPendingRequests(bPool, p) < PEER_MAX_REQUESTS } IN
LET pendingBlocks ==
{i \in Heights:
/\ i >= bPool.height
/\ \/ bPool.pendingBlocks[i] /= NilPeer
\/ bPool.blockStore[i] /= NilBlock
} IN
IF \/ peersThatCanServe = AsPidSet({})
\/ Cardinality(pendingBlocks) >= TARGET_PENDING
THEN NilPeer
\* pick a peer that can serve the request for height h and has the minimum number of pending requests
ELSE CHOOSE p \in peersThatCanServe: \A q \in peersThatCanServe:
/\ NumOfPendingRequests(bPool, p) <= NumOfPendingRequests(bPool, q)
\* Make a request for a block (if possible) and return a request message and block pool.
CreateRequest(bPool) ==
LET nextHeight == FindNextRequestHeight(bPool) IN
IF nextHeight = NilHeight THEN [msg |-> AsOutMsg(NoMsg), pool |-> bPool]
ELSE
LET peer == FindPeerToServe(bPool, nextHeight) IN
IF peer = NilPeer THEN [msg |-> AsOutMsg(NoMsg), pool |-> bPool]
ELSE
LET m == [type |-> "blockRequest", peerId |-> peer, height |-> nextHeight] IN
LET newPool == [bPool EXCEPT
!.pendingBlocks = [bPool.pendingBlocks EXCEPT ![nextHeight] = peer]
] IN
[msg |-> m, pool |-> newPool]
\* Returns node state, i.e., defines termination condition.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L432
ComputeNextState(bPool) ==
IF bPool.syncedBlocks = 0 \* corresponds to the syncTimeout in case no progress has been made for a period of time.
THEN "finished"
ELSE IF /\ bPool.height > 1
/\ bPool.height >= MaxPeerHeight(bPool) \* see https://github.com/tendermint/tendermint/blob/61057a8b0af2beadee106e47c4616b279e83c920/blockchain/v2/scheduler.go#L566
THEN "finished"
ELSE "running"
(* Verify if commit is for the given block id and if commit has enough voting power.
See https://github.com/tendermint/tendermint/blob/61057a8b0af2beadee106e47c4616b279e83c920/blockchain/v2/processor_context.go#L12 *)
VerifyCommit(block, lastCommit) ==
PossibleCommit(block, lastCommit)
(* Tries to execute next block in the pool, i.e., defines block validation logic.
Returns new block pool (peers that have sent invalid blocks are removed).
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/processor.go#L135 *)
ExecuteBlocks(bPool) ==
LET bStore == bPool.blockStore IN
LET block0 == bStore[bPool.height - 1] IN
\* blockPool is initialized with height = TrustedHeight + 1,
\* so bStore[bPool.height - 1] is well defined
LET block1 == bStore[bPool.height] IN
LET block2 == bStore[bPool.height + 1] IN
IF block1 = NilBlock \/ block2 = NilBlock
THEN bPool \* we don't have two next consecutive blocks
ELSE IF ~IsMatchingValidators(block1, block0.NextVS)
\* Check that block1.VS = block0.NextVS.
\* Otherwise, CorrectBlocksInv fails.
\* In the implementation NextVS is part of the application state,
\* so a mismatch can be found without access to block0.NextVS.
THEN \* the block does not have the expected validator set
RemovePeers({bPool.receivedBlocks[bPool.height]}, bPool)
ELSE IF ~VerifyCommit(block1, block2.lastCommit)
\* Verify commit of block2 based on block1.
\* Interestingly, we do not have to call IsMatchingValidators.
THEN \* remove the peers of block1 and block2, as they are considered faulty
RemovePeers({bPool.receivedBlocks[bPool.height],
bPool.receivedBlocks[bPool.height + 1]},
bPool)
ELSE \* all good, execute block at position height
[bPool EXCEPT !.height = bPool.height + 1]
\* Defines logic for pruning peers.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L613
TryPrunePeer(bPool, suspectedSet, isTimedOut) ==
(* -----------------------------------------------------------------------------------------------------------------------*)
(* Corresponds to function prunablePeers in scheduler.go file. Note that this function only checks if block has been *)
(* received from a peer during peerTimeout period. *)
(* Note that if no request has been scheduled to a correct peer, or a request has been scheduled only *)
(* recently (so the peer hasn't responded yet), the peer will be removed, as no block is received within peerTimeout. *)
(* In case of faulty peers, we don't have any guarantee that they will respond. *)
(* Therefore, we model this with nondeterministic behavior as it could lead to peer removal, for both correct and faulty. *)
(* See scheduler.go *)
(* https://github.com/tendermint/tendermint/blob/4298bbcc4e25be78e3c4f21979d6aa01aede6e87/blockchain/v2/scheduler.go#L335 *)
LET toRemovePeers == bPool.peerIds \intersect suspectedSet IN
(*
Corresponds to logic for pruning a peer that is responsible for delivering block for the next height.
The pruning logic for the next height is based on the time when a BlockRequest is sent. Therefore, if a request is sent
to a correct peer for the next height (blockPool.height), that peer should never be removed by this check, as we assume that
correct peers respond in a timely and reliable manner. However, if a request is sent to a faulty peer, then we
might or might not get a response on time, which is modelled with the nondeterministic isTimedOut flag.
See scheduler.go
https://github.com/tendermint/tendermint/blob/4298bbcc4e25be78e3c4f21979d6aa01aede6e87/blockchain/v2/scheduler.go#L617
*)
LET nextHeightPeer == bPool.pendingBlocks[bPool.height] IN
LET prunablePeers ==
IF /\ nextHeightPeer /= NilPeer
/\ nextHeightPeer \in FAULTY
/\ isTimedOut
THEN toRemovePeers \union {nextHeightPeer}
ELSE toRemovePeers
IN
RemovePeers(prunablePeers, bPool)
\* Handle SyncTimeout. It checks whether progress has been made (i.e., whether the height has increased) since the last SyncTimeout event.
HandleSyncTimeout(bPool) ==
[bPool EXCEPT
!.syncedBlocks = bPool.height - bPool.syncHeight,
!.syncHeight = bPool.height
]
HandleResponse(msg, bPool) ==
IF msg.type = "blockResponse" THEN
HandleBlockResponse(msg, bPool)
ELSE IF msg.type = "noBlockResponse" THEN
HandleNoBlockResponse(msg, bPool)
ELSE IF msg.type = "statusResponse" THEN
HandleStatusResponse(msg, bPool)
ELSE IF msg.type = "addPeer" THEN
AddPeer(msg.peerId, bPool)
ELSE IF msg.type = "removePeer" THEN
RemovePeers({msg.peerId}, bPool)
ELSE IF msg.type = "syncTimeout" THEN
HandleSyncTimeout(bPool)
ELSE
bPool
(*
At every node step we execute the following steps (atomically):
1) the input message is consumed and the corresponding handler is called,
2) the pruning logic is called,
3) block execution is triggered (we try to execute the block at the next height),
4) a request to a peer is made (if possible), and
5) we decide whether the termination condition is satisfied, and if so, we stop.
*)
NodeStep ==
\E suspectedSet \in SUBSET AllPeerIds: \* suspectedSet is a nondeterministic set of peers
\E isTimedOut \in BOOLEAN:
LET bPool == HandleResponse(inMsg, blockPool) IN
LET bp == TryPrunePeer(bPool, suspectedSet, isTimedOut) IN
LET nbPool == ExecuteBlocks(bp) IN
LET msgAndPool == CreateRequest(nbPool) IN
LET nstate == ComputeNextState(msgAndPool.pool) IN
/\ state' = nstate
/\ blockPool' = msgAndPool.pool
/\ outMsg' = msgAndPool.msg
/\ inMsg' = AsInMsg(NoMsg)
\* If the node is running, then in every step we try to create a blockRequest.
\* In addition, the input message (if there is one) is consumed and processed.
NextNode ==
\/ /\ state = "running"
/\ NodeStep
\/ /\ state = "finished"
/\ UNCHANGED <<nvars, inMsg, outMsg>>
(********************************** Peers ***********************************)
InitPeers ==
\E pHeights \in [AllPeerIds -> Heights]:
peersState = [
peerHeights |-> pHeights,
statusRequested |-> FALSE,
blocksRequested |-> AsOutMsgSet({})
]
HandleStatusRequest(msg, pState) ==
[pState EXCEPT
!.statusRequested = TRUE
]
HandleBlockRequest(msg, pState) ==
[pState EXCEPT
!.blocksRequested = pState.blocksRequested \union AsOutMsgSet({msg})
]
HandleRequest(msg, pState) ==
IF msg = AsOutMsg(NoMsg)
THEN pState
ELSE IF msg.type = "statusRequest"
THEN HandleStatusRequest(msg, pState)
ELSE HandleBlockRequest(msg, pState)
CreateStatusResponse(peer, pState, anyHeight) ==
LET m ==
IF peer \in CORRECT
THEN AsInMsg([type |-> "statusResponse", peerId |-> peer, height |-> pState.peerHeights[peer]])
ELSE AsInMsg([type |-> "statusResponse", peerId |-> peer, height |-> anyHeight]) IN
[msg |-> m, peers |-> pState]
CreateBlockResponse(msg, pState, arbitraryBlock) ==
LET m ==
IF msg.peerId \in CORRECT
THEN AsInMsg([type |-> "blockResponse", peerId |-> msg.peerId, block |-> CorrectBlock(msg.height)])
ELSE AsInMsg([type |-> "blockResponse", peerId |-> msg.peerId, block |-> arbitraryBlock]) IN
LET npState ==
[pState EXCEPT
!.blocksRequested = pState.blocksRequested \ {msg}
] IN
[msg |-> m, peers |-> npState]
GrowPeerHeight(pState) ==
\E p \in CORRECT:
/\ pState.peerHeights[p] < MAX_HEIGHT
/\ peersState' = [pState EXCEPT !.peerHeights[p] = @ + 1]
/\ inMsg' = AsInMsg(NoMsg)
SendStatusResponseMessage(pState) ==
/\ \E arbitraryHeight \in Heights:
\E peer \in AllPeerIds:
LET msgAndPeers == CreateStatusResponse(peer, pState, arbitraryHeight) IN
/\ peersState' = msgAndPeers.peers
/\ inMsg' = msgAndPeers.msg
SendAddPeerMessage ==
\E peer \in AllPeerIds:
inMsg' = AsInMsg([type |-> "addPeer", peerId |-> peer])
SendRemovePeerMessage ==
\E peer \in AllPeerIds:
inMsg' = AsInMsg([type |-> "removePeer", peerId |-> peer])
SendSyncTimeoutMessage ==
inMsg' = AsInMsg([type |-> "syncTimeout"])
SendControlMessage ==
\/ SendAddPeerMessage
\/ SendRemovePeerMessage
\/ SendSyncTimeoutMessage
\* An extremely important property of block hashes (blockId):
\* If the block hash coincides with the hash of the reference block,
\* then the blocks should be equal.
UnforgeableBlockId(height, block) ==
block.hashEqRef => block = chain[height]
\* A faulty peer cannot forge enough of the validators' signatures.
\* In other words: if a commit contains enough signatures from the validators (in reality 2/3, in the model all),
\* then the blockID points to the block on the chain, encoded as block.lastCommit.blockIdEqRef being true.
\* A more precise rule would check that the committers have over 2/3 of the VS's voting power.
NoFork(height, block) ==
(height > 1 /\ block.lastCommit.committers = chain[height - 1].VS)
=> block.lastCommit.blockIdEqRef
\* Can the block be produced by a faulty peer, assuming it cannot generate forks (a basic assumption of the protocol)?
IsBlockByFaulty(height, block) ==
/\ block.height = height
/\ UnforgeableBlockId(height, block)
/\ NoFork(height, block)
SendBlockResponseMessage(pState) ==
\* a response to a requested block: either by a correct, or by a faulty peer
\/ /\ pState.blocksRequested /= AsOutMsgSet({})
/\ \E msg \in pState.blocksRequested:
\E block \in Blocks:
/\ IsBlockByFaulty(msg.height, block)
/\ LET msgAndPeers == CreateBlockResponse(msg, pState, block) IN
/\ peersState' = msgAndPeers.peers
/\ inMsg' = msgAndPeers.msg
\* a faulty peer can always send an unsolicited block
\/ \E peerId \in FAULTY:
\E block \in Blocks:
/\ IsBlockByFaulty(block.height, block)
/\ peersState' = pState
/\ inMsg' = AsInMsg([type |-> "blockResponse",
peerId |-> peerId, block |-> block])
SendNoBlockResponseMessage(pState) ==
/\ peersState' = pState
/\ inMsg' \in AsInMsgSet([type: {"noBlockResponse"}, peerId: FAULTY, height: Heights])
SendResponseMessage(pState) ==
\/ SendBlockResponseMessage(pState)
\/ SendNoBlockResponseMessage(pState)
\/ SendStatusResponseMessage(pState)
NextEnvStep(pState) ==
\/ SendResponseMessage(pState)
\/ GrowPeerHeight(pState)
\/ SendControlMessage /\ peersState' = pState
\* note that we propagate pState that was missing in the previous version
\* Peers consume a message and update their local state. A peer then makes a single step, i.e., it sends at most a single message.
\* The message sent can be either a response to a request or a faulty message (sent by a faulty process).
NextPeers ==
LET pState == HandleRequest(outMsg, peersState) IN
/\ outMsg' = AsOutMsg(NoMsg)
/\ NextEnvStep(pState)
\* the composition of the node, the peers, the network and scheduler
Init ==
/\ IsCorrectChain(chain) \* initialize the blockchain
/\ InitNode
/\ InitPeers
/\ turn = "Peers"
/\ inMsg = AsInMsg(NoMsg)
/\ outMsg = AsOutMsg([type |-> "statusRequest"])
Next ==
IF turn = "Peers"
THEN
/\ NextPeers
/\ turn' = "Node"
/\ UNCHANGED <<nvars, chain>>
ELSE
/\ NextNode
/\ turn' = "Peers"
/\ UNCHANGED <<peersState, chain>>
FlipTurn ==
turn' =
IF turn = "Peers" THEN
"Node"
ELSE
"Peers"
\* Compute max peer height. Used as a helper operator in properties.
MaxCorrectPeerHeight(bPool) ==
LET correctPeers == {p \in bPool.peerIds: p \in CORRECT} IN
IF correctPeers = AsPidSet({})
THEN 0 \* no peers, just return 0
ELSE LET Hts == {bPool.peerHeights[p] : p \in correctPeers} IN
CHOOSE max \in Hts: \A h \in Hts: h <= max
\* properties to check
TypeOK ==
/\ state \in States
/\ inMsg \in InMsgs
/\ outMsg \in OutMsgs
/\ turn \in {"Peers", "Node"}
/\ peersState \in [
peerHeights: [AllPeerIds -> Heights \union {NilHeight}],
statusRequested: BOOLEAN,
blocksRequested:
SUBSET
[type: {"blockRequest"}, peerId: AllPeerIds, height: Heights]
]
/\ blockPool \in [
height: Heights,
peerIds: SUBSET AllPeerIds,
peerHeights: [AllPeerIds -> Heights \union {NilHeight}],
blockStore: [Heights -> Blocks \union {NilBlock}],
receivedBlocks: [Heights -> AllPeerIds \union {NilPeer}],
pendingBlocks: [Heights -> AllPeerIds \union {NilPeer}],
syncedBlocks: Heights \union {NilHeight, -1},
syncHeight: Heights
]
(* Incorrect synchronization: The last block may never be received *)
Sync1 ==
[](state = "finished" =>
blockPool.height >= MaxCorrectPeerHeight(blockPool))
Sync1AsInv ==
state = "finished" => blockPool.height >= MaxCorrectPeerHeight(blockPool)
(* Incorrect synchronization, as there may be a timeout *)
Sync2 ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ [] (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
Sync2AsInv ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
(* Correct synchronization *)
Sync3 ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ blockPool.syncedBlocks <= 0 \* timeout
\/ [] (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
Sync3AsInv ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ blockPool.syncedBlocks <= 0 \* timeout
\/ (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
(* Naive termination *)
\* This property is violated, as the faulty peers may produce infinitely many responses
Termination ==
WF_turn(FlipTurn) => <>(state = "finished")
(* Termination by timeout: the protocol terminates, if there is a timeout *)
\* the precondition: fair flip turn and eventual timeout when no new blocks were synchronized
TerminationByTOPre ==
/\ WF_turn(FlipTurn)
/\ <>(inMsg.type = "syncTimeout" /\ blockPool.height <= blockPool.syncHeight)
TerminationByTO ==
TerminationByTOPre => <>(state = "finished")
(* The termination property when we only have correct peers *)
\* as correct peers may spam the node with addPeer, removePeer, and statusResponse,
\* we have to enforce eventual response (there are no queues in our spec)
CorrBlockResponse ==
\A h \in Heights:
[](outMsg.type = "blockRequest" /\ outMsg.height = h
=> <>(inMsg.type = "blockResponse" /\ inMsg.block.height = h))
\* a precondition for termination in presence of only correct processes
TerminationCorrPre ==
/\ FAULTY = AsPidSet({})
/\ WF_turn(FlipTurn)
/\ CorrBlockResponse
\* termination when there are only correct processes
TerminationCorr ==
TerminationCorrPre => <>(state = "finished")
\* All synchronized blocks (except the last one) are exactly as in the reference chain
CorrectBlocksInv ==
\/ state /= "finished"
\/ \A h \in 1..(blockPool.height - 1):
blockPool.blockStore[h] = chain[h]
\* A false expectation that the protocol only finishes with the blocks
\* from the processes that had not been suspected of being faulty
SyncFromCorrectInv ==
\/ state /= "finished"
\/ \A h \in 1..blockPool.height:
blockPool.receivedBlocks[h] \in blockPool.peerIds \union {NilPeer}
\* A false expectation that a correct process is never removed from the set of peer ids.
\* A correct process may reply too late and then get evicted.
CorrectNeverSuspectedInv ==
CORRECT \subseteq blockPool.peerIds
BlockPoolInvariant ==
\A h \in Heights:
\* waiting for a block to arrive
\/ /\ blockPool.receivedBlocks[h] = NilPeer
/\ blockPool.blockStore[h] = NilBlock
\* valid block is received and is present in the store
\/ /\ blockPool.receivedBlocks[h] /= NilPeer
/\ blockPool.blockStore[h] /= NilBlock
/\ blockPool.pendingBlocks[h] = NilPeer
(* a few simple properties that trigger counterexamples *)
\* Shows execution in which peer set is empty
PeerSetIsNeverEmpty == blockPool.peerIds /= AsPidSet({})
\* Shows execution in which state = "finished" and MaxPeerHeight is not equal to 1
StateNotFinished ==
state /= "finished" \/ MaxPeerHeight(blockPool) = 1
=============================================================================
\*=============================================================================
\* Modification History
\* Last modified Fri May 29 20:41:53 CEST 2020 by igor
\* Last modified Thu Apr 16 16:57:22 CEST 2020 by zarkomilosevic
\* Created Tue Feb 04 10:36:18 CET 2020 by zarkomilosevic

rust-spec/fastsync/scheduler.tla (+606, -0)

@@ -0,0 +1,606 @@
------------------------------- MODULE scheduler -------------------------------
(*
A specification of the fast sync scheduler that is introduced in blockchain/v2:
https://github.com/tendermint/tendermint/tree/brapse/blockchain-v2-riri-reactor-2
The model includes:
- a scheduler that maintains the peers and blocks that it receives from the peers, and
- one environment simulating a correct peer
This specification focuses on the events that are received and produced by the scheduler.
Communication between the scheduler and the other fastsync components is not specified.
*)
EXTENDS Integers, FiniteSets
\* the protocol parameters
CONSTANTS
PeerIDs, \* potential peer ids, a set of integers, e.g. 0..2
ultimateHeight, \* the maximum height of the blockchain, an integer, e.g. 3
numRequests \* the maximum number of requests made when scheduling new blocks, e.g. 2
\* a few definitions
None == -1 \* an undefined value
Heights == 0..ultimateHeight \* potential heights
noErr == "errNone"
Errors == {
noErr, "errPeerNotFound", "errDelRemovedPeer", "errAddDuplicatePeer",
"errUpdateRemovedPeer", "errAfterPeerRemove", "errBadPeer", "errBadPeerState",
"errProcessedBlockEv", "finished", "timeout", "errAddRemovedPeer", "errPeerNoBlock",
"errBadBlockState", "errPeerTooShort"} \* errPeerTooShort is returned by markPending below
PeerStates == {"peerStateUnknown", "peerStateNew", "peerStateReady", "peerStateRemoved"}
BlockStates == {
"blockStateUnknown", "blockStateNew", "blockStatePending",
"blockStateReceived", "blockStateProcessed"}
\* basic stuff
Min(a, b) == IF a < b THEN a ELSE b
Max(a, b) == IF a > b THEN a ELSE b
\* the state of the scheduler:
VARIABLE turn \* who makes a step: the scheduler or the environment
\* the state of the reactor:
VARIABLES inEvent, \* an event from the environment to the scheduler
envRunning \* a Boolean, negation of stopProcessing in the implementation
\* the state of the scheduler:
VARIABLES outEvent, \* an event from the scheduler to the environment
scRunning
\* the block pool:
VARIABLE scheduler
(*
scheduler is a record that contains:
height: Int,
height of the next block to collect
peers: PeerIDs,
the set of peers that have connected in the past, may include removed peers
peerHeights: [PeerIDs -> Heights],
a map to collect the peer heights, >0 if peer in ready state, None (-1) otherwise
peerStates: [PeerIDs -> PeerStates],
a map to record the peer states
blockStates: [Heights -> BlockStates]
a set of heights for which blocks are to be scheduled, pending or received
pendingBlocks: [Heights -> PeerIDs],
a set of heights for which blocks are to be scheduled or pending
receivedBlocks: [Heights -> PeerIDs],
a set of heights for which blocks were received but not yet processed
blocks: Heights,
the full set of blocks requested or downloaded by the scheduler
*)
vars == <<turn, envRunning, inEvent, scRunning, outEvent, scheduler>>
\* for now just keep the height in the block
Blocks == [ height: Heights ]
noEvent == [type |-> "NoEvent"]
InEvents ==
{noEvent} \cup
[type: {"rTrySchedule", "tNoAdvanceExp"}] \cup
[type: {"bcStatusResponse"}, peerID: PeerIDs, height: Heights] \cup
[type: {"bcBlockResponse"}, peerID: PeerIDs, height: Heights, block: Blocks] \cup
[type: {"bcNoBlockResponse"}, peerID: PeerIDs, height: Heights] \cup
[type: {"pcBlockProcessed"}, peerID: PeerIDs, height: Heights] \cup
[type: {"pcBlockVerificationFailure"}, height: Heights, firstPeerID: PeerIDs, secondPeerID: PeerIDs] \cup
[type: {"bcAddNewPeer"}, peerID: PeerIDs] \cup
[type: {"bcRemovePeer"}, peerID: PeerIDs]
\* Output events produced by the scheduler.
\* Note: in v2 the status request is done by the reactor/environment
OutEvents ==
{noEvent} \cup
[type: {"scPeerError"}, peerID: PeerIDs, error: Errors] \cup
[type: {"scSchedulerFail"}, error: Errors] \cup
[type: {"scBlockRequest"}, peerID: PeerIDs, height: Heights] \cup
[type: {"scBlockReceived"}, peerID: PeerIDs, block: Blocks] \cup
[type: {"scPeersPruned"}, pruned: SUBSET [peerID: PeerIDs]] \cup
[type: {"scFinishedEv"}, error: Errors]
(* ----------------------------------------------------------------------------------------------*)
(* The behavior of the scheduler that keeps track of peers, block requests and responses, etc. *)
(* See scheduler.go *)
(* https://github.com/tendermint/tendermint/blob/v0.33.3/blockchain/v2/scheduler.go *)
(* ----------------------------------------------------------------------------------------------*)
addPeer(sc, peerID) ==
IF peerID \in sc.peers THEN
[err |-> "errAddDuplicatePeer", val |-> sc]
ELSE IF sc.peerStates[peerID] = "peerStateRemoved" THEN
[err |-> "errAddRemovedPeer", val |-> sc]
ELSE
LET newPeers == sc.peers \cup { peerID } IN
LET newPeerHeights == [sc.peerHeights EXCEPT ![peerID] = None] IN
LET newPeerStates == [sc.peerStates EXCEPT ![peerID] = "peerStateNew"] IN
LET newSc == [sc EXCEPT
!.peers = newPeers,
!.peerHeights = newPeerHeights,
!.peerStates = newPeerStates] IN
[err |-> noErr, val |-> newSc]
maxHeight(states, heights) ==
LET activePeers == {p \in DOMAIN states: states[p] = "peerStateReady"} IN
IF activePeers = {} THEN
0 \* no peers, just return 0
ELSE
CHOOSE max \in { heights[p] : p \in activePeers }:
\A p \in activePeers: heights[p] <= max \* max is the maximum
maxHeightScheduler(sc) ==
maxHeight(sc.peerStates, sc.peerHeights)
removePeer(sc, peerID) ==
IF peerID \notin sc.peers THEN
[err |-> "errPeerNotFound", val |-> sc]
ELSE IF sc.peerStates[peerID] = "peerStateRemoved" THEN
[err |-> "errDelRemovedPeer", val |-> sc]
ELSE
LET newSc == [sc EXCEPT
!.peerHeights[peerID] = None,
!.peerStates[peerID] = "peerStateRemoved",
\* remove all blocks from peerID and block requests to peerID, see scheduler.removePeer
!.blockStates = [h \in Heights |->
IF sc.pendingBlocks[h] = peerID \/ sc.receivedBlocks[h] = peerID THEN "blockStateNew"
ELSE sc.blockStates[h]],
!.pendingBlocks = [h \in Heights |-> IF sc.pendingBlocks[h] = peerID THEN None ELSE sc.pendingBlocks[h]],
!.receivedBlocks = [h \in Heights |-> IF sc.receivedBlocks[h] = peerID THEN None ELSE sc.receivedBlocks[h]]
] IN
[err |-> noErr, val |-> newSc]
addNewBlocks(sc, newMph) ==
\* add new blocks to be requested (e.g. when overall max peer height has changed)
IF Cardinality(sc.blocks) >= numRequests THEN
[newBlocks |-> sc.blocks, newBlockStates |-> sc.blockStates]
ELSE
LET requestSet == sc.height.. Min(numRequests+sc.height, newMph) IN
LET heightsToRequest == {h \in requestSet: sc.blockStates[h] = "blockStateUnknown"} IN
LET newBlockStates == [
h \in Heights |-> IF h \in heightsToRequest THEN "blockStateNew"
ELSE sc.blockStates[h]] IN
[newBlocks |-> heightsToRequest, newBlockStates |-> newBlockStates]
\* Update the peer height (the peer should have been previously added)
setPeerHeight(sc, peerID, height) ==
IF peerID \notin sc.peers THEN
[err |-> "errPeerNotFound", val |-> sc]
ELSE IF sc.peerStates[peerID] = "peerStateRemoved" THEN
[err |-> "errUpdateRemovedPeer", val |-> sc]
ELSE IF height < sc.peerHeights[peerID] THEN (* The peer is corrupt? Remove the peer. *)
removePeer(sc, peerID)
ELSE
LET newPeerHeights == [sc.peerHeights EXCEPT ![peerID] = height] IN \* set the peer's height
LET newPeerStates == [sc.peerStates EXCEPT ![peerID] = "peerStateReady"] IN \* set the peer's state
LET newMph == maxHeight(newPeerStates, newPeerHeights) IN
LET res == addNewBlocks(sc, newMph) IN
LET newSc == [sc EXCEPT
!.peerHeights = newPeerHeights, !.peerStates = newPeerStates,
!.blocks = res.newBlocks, !.blockStates = res.newBlockStates] IN
[err |-> noErr, val |-> newSc]
nextHeightToSchedule(sc) ==
LET toBeScheduled == {h \in DOMAIN sc.blockStates: sc.blockStates[h] = "blockStateNew"} \cup {ultimateHeight+1} IN
CHOOSE minH \in toBeScheduled: \A h \in toBeScheduled: h >= minH
getStateAtHeight(sc, h) ==
IF h < sc.height THEN
"blockStateProcessed"
ELSE IF h \in DOMAIN sc.blockStates THEN
sc.blockStates[h]
ELSE
"blockStateUnknown"
markPending(sc, peerID, h) ==
IF getStateAtHeight(sc, h) /= "blockStateNew" THEN
[err |-> "errBadBlockState", val |-> sc]
ELSE IF peerID \notin sc.peers \/ sc.peerStates[peerID] /= "peerStateReady" THEN
[err |-> "errBadPeerState", val |-> sc]
ELSE IF h > sc.peerHeights[peerID] THEN
[err |-> "errPeerTooShort", val |-> sc]
ELSE
LET newSc == [sc EXCEPT
!.blockStates = [sc.blockStates EXCEPT ![h] = "blockStatePending"],
!.pendingBlocks = [sc.pendingBlocks EXCEPT ![h] = peerID]] IN
[err |-> noErr, val |-> newSc]
markReceived(sc, peerID, h) ==
IF peerID \notin sc.peers \/ sc.peerStates[peerID] /= "peerStateReady" THEN
[err |-> "errBadPeerState", val |-> sc]
ELSE IF getStateAtHeight(sc, h) /= "blockStatePending" \/ sc.pendingBlocks[h] /= peerID THEN
[err |-> "errBadPeer", val |-> sc]
ELSE
LET newSc == [sc EXCEPT
!.blockStates = [sc.blockStates EXCEPT ![h] = "blockStateReceived"],
!.pendingBlocks = [sc.pendingBlocks EXCEPT ![h] = None],
!.receivedBlocks = [sc.receivedBlocks EXCEPT ![h] = peerID]] IN
[err |-> noErr, val |-> newSc]
markProcessed(sc, h) ==
IF getStateAtHeight(sc, h) /= "blockStateReceived" THEN
[err |-> "errProcessedBlockEv", val |-> sc]
ELSE
LET newSc == [sc EXCEPT
!.blockStates = [sc.blockStates EXCEPT ![h] = "blockStateProcessed"],
!.receivedBlocks = [sc.receivedBlocks EXCEPT ![h] = None],
!.height = sc.height + 1] IN
[err |-> noErr, val |-> newSc]
reachedMaxHeight(sc) ==
IF sc.peers = {} THEN
FALSE
ELSE
LET maxH == maxHeightScheduler(sc) IN
maxH > 0 /\ (sc.height >= maxH)
highPeers(sc, minH) == {p \in sc.peers: sc.peerHeights[p] >= minH}
(* ----------------------------------------------------------------------------------------------*)
(* The behavior of the scheduler state machine *)
(* See scheduler.go *)
(* https://github.com/tendermint/tendermint/tree/brapse/blockchain-v2-riri-reactor-2/scheduler.go*)
(* ----------------------------------------------------------------------------------------------*)
blStateInit(h, start) ==
IF h <= start THEN
"blockStateProcessed"
ELSE "blockStateUnknown"
InitSc ==
/\ scRunning = TRUE
/\ outEvent = noEvent
/\ \E startHeight \in Heights:
scheduler = [
initHeight |-> startHeight,
height |-> startHeight + 1,
peers |-> {},
peerHeights |-> [p \in PeerIDs |-> None],
peerStates |-> [p \in PeerIDs |-> "peerStateUnknown"],
blocks |-> {},
blockStates |-> [h \in Heights |-> blStateInit(h, startHeight)],
pendingBlocks |-> [h \in Heights |-> None],
receivedBlocks |-> [h \in Heights |-> None]
]
handleAddNewPeer ==
/\ inEvent.type = "bcAddNewPeer"
/\ LET res == addPeer(scheduler, inEvent.peerID) IN
IF res.err /= noErr THEN
/\ outEvent' = [type |-> "scSchedulerFail", error |-> res.err]
/\ UNCHANGED <<scheduler>>
ELSE
/\ scheduler' = res.val
/\ UNCHANGED outEvent
/\ UNCHANGED scRunning
finishSc(event) ==
event.type = "scFinishedEv" /\ event.error = "finished"
handleRemovePeer ==
/\ inEvent.type = "bcRemovePeer"
/\ LET res == removePeer(scheduler, inEvent.peerID) IN
IF res.err /= noErr THEN
/\ outEvent' = [type |-> "scSchedulerFail", error |-> res.err]
/\ UNCHANGED scheduler
ELSE
/\ scheduler' = res.val
/\ IF reachedMaxHeight(scheduler') THEN
outEvent' = [type |-> "scFinishedEv", error |-> "errAfterPeerRemove"]
ELSE
UNCHANGED outEvent
/\ IF finishSc(outEvent') THEN
scRunning' = FALSE
ELSE UNCHANGED scRunning
handleStatusResponse ==
/\ inEvent.type = "bcStatusResponse"
/\ LET res == setPeerHeight(scheduler, inEvent.peerID, inEvent.height) IN
IF res.err /= noErr THEN
/\ outEvent' = [type |-> "scPeerError", peerID |-> inEvent.peerID, error |-> res.err]
/\ UNCHANGED scheduler
ELSE
/\ scheduler' = res.val
/\ UNCHANGED outEvent
/\ UNCHANGED scRunning
handleTrySchedule == \* every 10 ms, but our spec is asynchronous
/\ inEvent.type = "rTrySchedule"
/\ LET minH == nextHeightToSchedule(scheduler) IN
IF minH = ultimateHeight+1 THEN
/\ outEvent' = noEvent
/\ UNCHANGED scheduler
ELSE
/\ LET hp == highPeers(scheduler, minH) IN
IF hp = {} THEN
/\ outEvent' = noEvent
/\ UNCHANGED scheduler
ELSE \E bestPeerID \in hp:
/\ LET res == markPending(scheduler, bestPeerID, minH) IN
/\ IF res.err /= noErr THEN
outEvent' = [type |-> "scSchedulerFail", error|-> res.err]
ELSE
outEvent' = [type |-> "scBlockRequest", peerID |-> bestPeerID, height |-> minH]
/\ scheduler' = res.val
/\ UNCHANGED scRunning
handleBlockResponse ==
/\ inEvent.type = "bcBlockResponse"
/\ LET res == markReceived(scheduler, inEvent.peerID, inEvent.height) IN
IF res.err /= noErr THEN
LET res1 == removePeer(scheduler, inEvent.peerID) IN
/\ outEvent' = [type |-> "scPeerError", peerID |-> inEvent.peerID, error |-> res.err]
/\ scheduler' = res1.val
ELSE
/\ outEvent' = [type |-> "scBlockReceived", peerID |-> inEvent.peerID, block |-> inEvent.block]
/\ scheduler' = res.val
/\ UNCHANGED scRunning
handleNoBlockResponse ==
/\ inEvent.type = "bcNoBlockResponse"
/\ IF (scheduler.peers = {} \/ scheduler.peerStates[inEvent.peerID] = "peerStateRemoved") THEN
/\ outEvent' = noEvent
/\ UNCHANGED scheduler
ELSE
LET res == removePeer(scheduler, inEvent.peerID) IN
/\ outEvent' = [type |-> "scPeerError", peerID |-> inEvent.peerID, error |-> "errPeerNoBlock"]
/\ scheduler' = res.val
/\ UNCHANGED scRunning
handleBlockProcessed ==
/\ inEvent.type = "pcBlockProcessed"
/\ IF inEvent.height /= scheduler.height THEN
/\ outEvent' = [type |-> "scSchedulerFail", error |-> "errProcessedBlockEv"]
/\ UNCHANGED scheduler
ELSE
LET res == markProcessed(scheduler, inEvent.height) IN
IF res.err /= noErr THEN
/\ outEvent' = [type |-> "scSchedulerFail", error |-> res.err]
/\ UNCHANGED scheduler
ELSE
/\ scheduler' = res.val
/\ IF reachedMaxHeight(scheduler') THEN
outEvent' = [type |-> "scFinishedEv", error |-> "finished"]
ELSE
outEvent' = noEvent
/\ IF finishSc(outEvent') THEN
scRunning' = FALSE
ELSE UNCHANGED scRunning
handleBlockProcessError ==
/\ inEvent.type = "pcBlockVerificationFailure"
/\ IF scheduler.peers = {} THEN
/\ outEvent' = noEvent
/\ UNCHANGED scheduler
ELSE
LET res1 == removePeer(scheduler, inEvent.firstPeerID) IN
LET res2 == removePeer(res1.val, inEvent.secondPeerID) IN
/\ IF reachedMaxHeight(res2.val) THEN
outEvent' = [type |-> "scFinishedEv", error |-> "finished"]
ELSE
outEvent' = noEvent
/\ scheduler' = res2.val
/\ IF finishSc(outEvent') THEN
scRunning' = FALSE
ELSE UNCHANGED scRunning
handleNoAdvanceExp ==
/\ inEvent.type = "tNoAdvanceExp"
/\ outEvent' = [type |-> "scFinishedEv", error |-> "timeout"]
/\ scRunning' = FALSE
/\ UNCHANGED <<scheduler>>
NextSc ==
IF ~scRunning THEN
UNCHANGED <<outEvent, scRunning, scheduler>>
ELSE
\/ handleStatusResponse
\/ handleAddNewPeer
\/ handleRemovePeer
\/ handleTrySchedule
\/ handleBlockResponse
\/ handleNoBlockResponse
\/ handleBlockProcessed
\/ handleBlockProcessError
\/ handleNoAdvanceExp
(* ----------------------------------------------------------------------------------------------*)
(* The behavior of the environment. *)
(* ----------------------------------------------------------------------------------------------*)
InitEnv ==
/\ inEvent = noEvent
/\ envRunning = TRUE
OnGlobalTimeoutTicker ==
/\ inEvent' = [type |-> "tNoAdvanceExp"]
/\ envRunning' = FALSE
OnTrySchedule ==
/\ inEvent' = [type |-> "rTrySchedule"]
/\ UNCHANGED envRunning
OnAddPeerEv ==
/\ inEvent' \in [type: {"bcAddNewPeer"}, peerID: PeerIDs]
/\ UNCHANGED envRunning
OnStatusResponseEv ==
\* any status response can come from the blockchain, pick one non-deterministically
/\ inEvent' \in [type: {"bcStatusResponse"}, peerID: PeerIDs, height: Heights]
/\ UNCHANGED envRunning
OnBlockResponseEv ==
\* any block response can come from the blockchain, pick one non-deterministically
/\ inEvent' \in [type: {"bcBlockResponse"}, peerID: PeerIDs, height: Heights, block: Blocks]
/\ UNCHANGED envRunning
OnNoBlockResponseEv ==
\* any no block response can come from the blockchain, pick one non-deterministically
/\ inEvent' \in [type: {"bcNoBlockResponse"}, peerID: PeerIDs, height: Heights]
/\ UNCHANGED envRunning
OnRemovePeerEv ==
\* although bcRemovePeer admits an arbitrary set, we produce just a singleton
/\ inEvent' \in [type: {"bcRemovePeer"}, peerID: PeerIDs]
/\ UNCHANGED envRunning
OnPcBlockProcessed ==
/\ inEvent' \in [type: {"pcBlockProcessed"}, peerID: PeerIDs, height: Heights]
/\ UNCHANGED envRunning
OnPcBlockVerificationFailure ==
/\ inEvent' \in [type: {"pcBlockVerificationFailure"}, firstPeerID: PeerIDs, secondPeerID: PeerIDs, height: Heights]
/\ UNCHANGED envRunning
\* messages from scheduler
OnScFinishedEv ==
/\ outEvent.type = "scFinishedEv"
/\ envRunning' = FALSE \* stop the env
/\ UNCHANGED inEvent
NextEnv ==
IF ~envRunning THEN
UNCHANGED <<inEvent, envRunning>>
ELSE
\/ OnScFinishedEv
\/ OnGlobalTimeoutTicker
\/ OnAddPeerEv
\/ OnTrySchedule
\/ OnStatusResponseEv
\/ OnBlockResponseEv
\/ OnNoBlockResponseEv
\/ OnRemovePeerEv
\/ OnPcBlockProcessed
\/ OnPcBlockVerificationFailure
(* ----------------------------------------------------------------------------------------------*)
(* The system is the composition of the environment and the scheduler *)
(* ----------------------------------------------------------------------------------------------*)
Init == turn = "environment" /\ InitEnv /\ InitSc
FlipTurn ==
turn' = (
IF turn = "scheduler" THEN
"environment"
ELSE
"scheduler"
)
\* scheduler and environment alternate their steps (synchronous composition introduces more states)
Next ==
/\ FlipTurn
/\ IF turn = "scheduler" THEN
/\ NextSc
/\ inEvent' = noEvent
/\ UNCHANGED envRunning
ELSE
/\ NextEnv
/\ outEvent' = noEvent
/\ UNCHANGED <<scRunning, scheduler>>
Spec == Init /\ [][Next]_vars /\ WF_turn(FlipTurn)
(* ----------------------------------------------------------------------------------------------*)
(* Invariants *)
(* ----------------------------------------------------------------------------------------------*)
TypeOK ==
/\ turn \in {"scheduler", "environment"}
/\ inEvent \in InEvents
/\ envRunning \in BOOLEAN
/\ outEvent \in OutEvents
/\ scheduler \in [
initHeight: Heights,
height: Heights \cup {ultimateHeight + 1},
peers: SUBSET PeerIDs,
peerHeights: [PeerIDs -> Heights \cup {None}],
peerStates: [PeerIDs -> PeerStates],
blocks: SUBSET Heights,
blockStates: [Heights -> BlockStates],
pendingBlocks: [Heights -> PeerIDs \cup {None}],
receivedBlocks: [Heights -> PeerIDs \cup {None}]
]
(* ----------------------------------------------------------------------------------------------*)
(* Helpers for Properties *)
(* ----------------------------------------------------------------------------------------------*)
NoFailuresAndTimeouts ==
/\ inEvent.type /= "bcRemovePeer"
/\ inEvent.type /= "bcNoBlockResponse"
/\ inEvent.type /= "pcBlockVerificationFailure"
\* simulate good peer behavior using this formula. Useful to show termination in the presence of good peers.
GoodResponse ==
\/ inEvent.type
\in {"bcAddNewPeer", "bcStatusResponse", "bcBlockResponse", "pcBlockProcessed", "rTrySchedule"}
\/ ~envRunning
\* all blocks from initHeight up to max peer height have been processed
AllRequiredBlocksProcessed ==
LET maxH == Max(scheduler.height, maxHeightScheduler(scheduler)) IN
LET processedBlocks == {h \in scheduler.initHeight.. maxH-1: scheduler.blockStates[h] = "blockStateProcessed"} IN
scheduler.height >= maxH /\ Cardinality(processedBlocks) = scheduler.height - scheduler.initHeight
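\* For illustration only (not part of the original spec), assuming maxHeightScheduler
\* yields the maximal peer height known to the scheduler: with initHeight = 1,
\* maxH = 4 and scheduler.height = 4, the predicate requires blocks 1..3 to be in
\* state "blockStateProcessed".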
IncreaseHeight ==
(scheduler'.height > scheduler.height) \/ (scheduler.height >= ultimateHeight)
(* ----------------------------------------------------------------------------------------------*)
(* Expected properties *)
(* ----------------------------------------------------------------------------------------------*)
(* *)
(* 1. TerminationWhenNoAdvance - termination if there are no correct peers. *)
(* The model assumes the "noAdvance" timer expires and the "tNoAdvanceExp" event is received *)
(* *)
TerminationWhenNoAdvance ==
(inEvent.type = "tNoAdvanceExp")
=> <>(outEvent.type = "scFinishedEv" /\ outEvent.error = "timeout")
(* ----------------------------------------------------------------------------------------------*)
(* *)
(* 2. TerminationGoodPeers - *)
(* termination when IncreaseHeight holds true, fastsync is progressing, all blocks processed *)
(* *)
TerminationGoodPeers ==
(/\ scheduler.height < ultimateHeight
/\ <>[]GoodResponse
/\[]<>(<<IncreaseHeight>>_<<scheduler, turn>>)
)
=> <>(outEvent.type = "scFinishedEv" /\ AllRequiredBlocksProcessed)
(* This property is violated. It shows that the precondition of TerminationGoodPeers is not *)
(* always FALSE *)
TerminationGoodPeersPre ==
(/\ scheduler.height < ultimateHeight
/\ <>[]GoodResponse
/\[]<>(<<IncreaseHeight>>_<<scheduler, turn>>)
)
=> FALSE
(* ----------------------------------------------------------------------------------------------*)
(* 3. TerminationAllCases - *)
(* any peer behavior, either terminates with all blocks processed or times out *)
TerminationAllCases ==
(/\ scheduler.height < ultimateHeight
/\(([]<> (<<IncreaseHeight>>_<<scheduler, turn>>)) \/ <>(inEvent.type = "tNoAdvanceExp"))
)
=> <>(outEvent.type = "scFinishedEv" /\ (AllRequiredBlocksProcessed \/ outEvent.error = "timeout"))
(* This property is violated. It shows that the precondition of TerminationAllCases is not *)
(* always FALSE *)
TerminationAllCasesPre ==
(/\ scheduler.height < ultimateHeight
/\(([]<> (<<IncreaseHeight>>_<<scheduler, turn>>)) \/ <>(inEvent.type = "tNoAdvanceExp"))
)
=> FALSE
(* This property is violated. TLC output shows an example of increasing heights in the scheduler *)
SchedulerIncreasePre ==
[]<>(<<IncreaseHeight>>_<<scheduler, turn>>)
=> FALSE
=============================================================================
\* Modification History
\* Last modified Thu Apr 16 13:21:33 CEST 2020 by ancaz
\* Created Sat Feb 08 13:12:30 CET 2020 by ancaz

+ 13
- 0
rust-spec/fastsync/verification/001bmc-apalache.csv View File

@ -0,0 +1,13 @@
no,filename,tool,timeout,init,inv,next,args
1,MC_1_1_4.tla,apalache,1h,,Sync1AsInv,,--length=20
2,MC_1_1_4.tla,apalache,1h,,Sync2AsInv,,--length=20
3,MC_1_1_4.tla,apalache,4h,,Sync3AsInv,,--length=20
4,MC_1_1_4.tla,apalache,8h,,CorrectBlocksInv,,--length=16
5,MC_1_1_4.tla,apalache,1h,,SyncFromCorrectInv,,--length=20
6,MC_1_1_4.tla,apalache,1h,,CorrectNeverSuspectedInv,,--length=20
7,MC_1_2_4.tla,apalache,1h,,Sync1AsInv,,--length=20
8,MC_1_2_4.tla,apalache,1h,,Sync2AsInv,,--length=20
9,MC_1_2_4.tla,apalache,4h,,Sync3AsInv,,--length=20
10,MC_1_2_4.tla,apalache,8h,,CorrectBlocksInv,,--length=16
11,MC_1_2_4.tla,apalache,1h,,SyncFromCorrectInv,,--length=20
12,MC_1_2_4.tla,apalache,1h,,CorrectNeverSuspectedInv,,--length=20

+ 9
- 0
rust-spec/fastsync/verification/002tlc-tlc.csv View File

@ -0,0 +1,9 @@
no,filename,tool,timeout,init,inv,next,args
1,MC_1_1_4.tla,tlc,1h,,,,-workers 4 -config MC-Sync1AsInv.cfg
2,MC_1_1_4.tla,tlc,1h,,,,-workers 4 -config MC-Sync2AsInv.cfg
3,MC_1_1_4.tla,tlc,24h,,,,-workers 4 -config MC-Sync3AsInv.cfg
4,MC_1_1_4.tla,tlc,2h,,Termination,,-workers 4 -config MC-Termination.cfg
5,MC_1_1_4.tla,tlc,24h,,TerminationByTO,,-workers 4 -config MC-TerminationByTO.cfg
6,MC_1_1_4.tla,tlc,24h,,,,-workers 4 -config MC-CorrectBlocksInv.cfg
7,MC_1_1_4.tla,tlc,1h,,,,-workers 4 -config MC-SyncFromCorrectInv.cfg
8,MC_1_1_4.tla,tlc,1h,,,,-workers 4 -config MC-CorrectNeverSuspectedInv.cfg

+ 10
- 0
rust-spec/fastsync/verification/MC-CorrectBlocksInv.cfg View File

@ -0,0 +1,10 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* INVARIANT definition
INVARIANT
CorrectBlocksInv
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 10
- 0
rust-spec/fastsync/verification/MC-CorrectNeverSuspectedInv.cfg View File

@ -0,0 +1,10 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* INVARIANT definition
INVARIANT
CorrectNeverSuspectedInv
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 10
- 0
rust-spec/fastsync/verification/MC-Sync1AsInv.cfg View File

@ -0,0 +1,10 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* INVARIANT definition
INVARIANT
Sync1AsInv
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 10
- 0
rust-spec/fastsync/verification/MC-Sync2AsInv.cfg View File

@ -0,0 +1,10 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* INVARIANT definition
INVARIANT
Sync2AsInv
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 10
- 0
rust-spec/fastsync/verification/MC-Sync3AsInv.cfg View File

@ -0,0 +1,10 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* INVARIANT definition
INVARIANT
Sync3AsInv
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 10
- 0
rust-spec/fastsync/verification/MC-SyncFromCorrectInv.cfg View File

@ -0,0 +1,10 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* INVARIANT definition
INVARIANT
SyncFromCorrectInv
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 10
- 0
rust-spec/fastsync/verification/MC-Termination.cfg View File

@ -0,0 +1,10 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* PROPERTY definition
PROPERTY
Termination
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 10
- 0
rust-spec/fastsync/verification/MC-TerminationByTO.cfg View File

@ -0,0 +1,10 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* PROPERTY definition
PROPERTY
TerminationByTO
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 10
- 0
rust-spec/fastsync/verification/MC-TerminationCorr.cfg View File

@ -0,0 +1,10 @@
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* PROPERTY definition
PROPERTY
TerminationCorr
\* Generated on Wed May 27 18:45:18 CEST 2020

+ 17
- 0
rust-spec/fastsync/verification/MC_1_0_4.tla View File

@ -0,0 +1,17 @@
--------------------------- MODULE MC_1_0_4 ------------------------------------
\*a <: b == a \* type annotation
VALIDATOR_SETS == {"vs1", "vs2"}
NIL_VS == "NilVS"
CORRECT == {"c1"}
FAULTY == {"f2"} \ {"f2"}
MAX_HEIGHT == 4
PEER_MAX_REQUESTS == 2
TARGET_PENDING == 3
VARIABLES
state, blockPool, peersState, chain, turn, inMsg, outMsg
INSTANCE fastsync_apalache
================================================================================

+ 15
- 0
rust-spec/fastsync/verification/MC_1_1_4.tla View File

@ -0,0 +1,15 @@
--------------------------- MODULE MC_1_1_4 ------------------------------------
VALIDATOR_SETS == {"vs1", "vs2"}
NIL_VS == "NilVS"
CORRECT == {"c1"}
FAULTY == {"f2"}
MAX_HEIGHT == 4
PEER_MAX_REQUESTS == 2
TARGET_PENDING == 3
VARIABLES
state, blockPool, peersState, chain, turn, inMsg, outMsg
INSTANCE fastsync_apalache
================================================================================

+ 15
- 0
rust-spec/fastsync/verification/MC_1_2_4.tla View File

@ -0,0 +1,15 @@
-------------------------- MODULE MC_1_2_4 ------------------------------------
VALIDATOR_SETS == {"vs1", "vs2"}
NIL_VS == "NilVS"
CORRECT == {"c1"}
FAULTY == {"f2", "f3"}
MAX_HEIGHT == 4
PEER_MAX_REQUESTS == 2
TARGET_PENDING == 3
VARIABLES
state, blockPool, peersState, chain, turn, inMsg, outMsg
INSTANCE fastsync_apalache
===============================================================================

+ 15
- 0
rust-spec/fastsync/verification/MC_1_2_5.tla View File

@ -0,0 +1,15 @@
-------------------------- MODULE MC_1_2_5 ------------------------------------
VALIDATOR_SETS == {"vs1", "vs2"}
NIL_VS == "NilVS"
CORRECT == {"c1"}
FAULTY == {"f2", "f3"}
MAX_HEIGHT == 5
PEER_MAX_REQUESTS == 2
TARGET_PENDING == 3
VARIABLES
state, blockPool, peersState, chain, turn, inMsg, outMsg
INSTANCE fastsync_apalache
===============================================================================

+ 15
- 0
rust-spec/fastsync/verification/MC_1_3_5.tla View File

@ -0,0 +1,15 @@
------------------------------- MODULE MC_1_3_5 ------------------------------------
MAX_HEIGHT == 5
VALIDATOR_SETS == {"vs1", "vs2"}
NIL_VS == "NilVS"
CORRECT == {"c1"}
FAULTY == {"f2", "f3", "f4"}
PEER_MAX_REQUESTS == 2
TARGET_PENDING == 3
VARIABLES
state, blockPool, peersState, chain, turn, inMsg, outMsg
INSTANCE fastsync_apalache
========================================================================================

+ 17
- 0
rust-spec/fastsync/verification/MC_2_0_4.tla View File

@ -0,0 +1,17 @@
--------------------------- MODULE MC_2_0_4 ------------------------------------
a <: b == a \* type annotation
VALIDATOR_SETS == {"vs1", "vs2"}
NIL_VS == "NilVS"
CORRECT == {"c1", "c2"}
FAULTY == {} <: {STRING}
MAX_HEIGHT == 4
PEER_MAX_REQUESTS == 2
TARGET_PENDING == 3
VARIABLES
state, blockPool, peersState, chain, turn, inMsg, outMsg
INSTANCE fastsync_apalache
================================================================================

+ 110
- 0
rust-spec/fastsync/verification/Tinychain.tla View File

@ -0,0 +1,110 @@
-------------------------- MODULE Tinychain ----------------------------------
(* A very abstract model of the Tendermint blockchain. Its only purpose is to highlight
the relation between validator sets, next validator sets, and last commits.
*)
EXTENDS Integers
\* type annotation
a <: b == a
\* the type of validator sets, e.g., STRING
VST == STRING
\* LastCommit type.
\* It contains:
\* 1) a flag indicating whether the block id equals the hash of the previous block, and
\* 2) the set of the validators who have committed the block.
\* In the implementation, blockId is the hash of the previous block, which cannot be forged.
\* We abstract the block id into whether it equals the hash of the previous block or not.
LCT == [blockIdEqRef |-> BOOLEAN, committers |-> VST]
\* Block type.
\* A block contains its height, validator set, next validator set, and last commit.
\* Moreover, it contains a flag that tells us whether the block equals the one
\* on the reference chain (this is an abstraction of hash).
BT == [height |-> Int, hashEqRef |-> BOOLEAN, wellFormed |-> BOOLEAN,
VS |-> VST, NextVS |-> VST, lastCommit |-> LCT]
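\* For illustration only (not part of the original spec): a block record of type BT,
\* assuming "vs1" and "vs2" are among the abstract validator sets, e.g.,
\*   [height |-> 2, hashEqRef |-> TRUE, wellFormed |-> TRUE,
\*    VS |-> "vs2", NextVS |-> "vs1",
\*    lastCommit |-> [blockIdEqRef |-> TRUE, committers |-> "vs1"]]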
CONSTANTS
(*
A set of abstract values, each value representing a set of validators.
For the purposes of this specification, they can be any values,
e.g., "s1", "s2", etc.
*)
VALIDATOR_SETS,
(* a nil validator set that is outside of VALIDATOR_SETS *)
NIL_VS,
(* The maximal height, up to which the blockchain may grow *)
MAX_HEIGHT
Heights == 1..MAX_HEIGHT
\* the set of all possible commits
Commits == [blockIdEqRef: BOOLEAN, committers: VALIDATOR_SETS]
\* the set of all possible blocks, not necessarily valid ones
Blocks ==
[height: Heights, hashEqRef: BOOLEAN, wellFormed: BOOLEAN,
VS: VALIDATOR_SETS, NextVS: VALIDATOR_SETS, lastCommit: Commits]
\* Does the chain contain a sound sequence of blocks that could be produced by
\* a 2/3 of faulty validators. This operator can be used to initialise the chain!
IsCorrectChain(chain) ==
\* restrict the structure of the blocks, to decrease the TLC search space
LET OkCommits == [blockIdEqRef: {TRUE}, committers: VALIDATOR_SETS]
OkBlocks == [height: Heights, hashEqRef: {TRUE}, wellFormed: {TRUE},
VS: VALIDATOR_SETS, NextVS: VALIDATOR_SETS, lastCommit: OkCommits]
IN
/\ chain \in [1..MAX_HEIGHT -> OkBlocks]
/\ \A h \in 1..MAX_HEIGHT:
LET b == chain[h] IN
/\ b.height = h \* the height is correct
/\ h > 1 =>
LET p == chain[h - 1] IN
/\ b.VS = p.NextVS \* the validators propagate from the previous block
/\ b.lastCommit.committers = p.VS \* and they are the committers
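\* For illustration only (a hypothetical instance, not part of the original spec):
\* with MAX_HEIGHT = 2 and validator sets "vs1", "vs2", the following chain satisfies
\* IsCorrectChain, since chain[2].VS = chain[1].NextVS and
\* chain[2].lastCommit.committers = chain[1].VS:
\*   chain[1] = [height |-> 1, hashEqRef |-> TRUE, wellFormed |-> TRUE,
\*               VS |-> "vs1", NextVS |-> "vs2",
\*               lastCommit |-> [blockIdEqRef |-> TRUE, committers |-> "vs1"]]
\*   chain[2] = [height |-> 2, hashEqRef |-> TRUE, wellFormed |-> TRUE,
\*               VS |-> "vs2", NextVS |-> "vs1",
\*               lastCommit |-> [blockIdEqRef |-> TRUE, committers |-> "vs1"]]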
\* The basic properties of blocks on the blockchain:
\* They should pass the validity check and they may verify the next block.
\* Does the block pass the consistency check against the next validators of the previous block
IsMatchingValidators(block, nextVS) ==
\* simply check that the validator set is propagated correctly.
\* (the implementation tests hashes and the application state)
block.VS = nextVS
\* Does the block verify the commit (of the next block)
PossibleCommit(block, commit) ==
\* the commits are signed by the block validators
/\ commit.committers = block.VS
\* The block id in the commit matches the block hash (abstract comparison).
\* (The implementation has extensive tests for that.)
\* this is an abstraction of: commit.blockId = hash(block)
\*
\* These are possible scenarios on the concrete hashes:
\*
\* scenario 1: commit.blockId = 10 /\ hash(block) = 10 /\ hash(ref) = 10
\* scenario 2: commit.blockId = 20 /\ hash(block) = 20 /\ block.VS /= ref.VS
\* scenario 3: commit.blockId = 50 /\ hash(block) = 100
\* scenario 4: commit.blockId = 10 /\ hash(block) = 100
\* scenario 5: commit.blockId = 100 /\ hash(block) = 10
/\ commit.blockIdEqRef = block.hashEqRef
\* the following test would be cheating, as we do not have access to the
\* reference chain:
\* /\ commit.blockIdEqRef
\* Basic invariants
\* every block has the validator set that is chosen by its predecessor
ValidBlockInv(chain) ==
\A h \in 2..MAX_HEIGHT:
IsMatchingValidators(chain[h], chain[h - 1].NextVS)
\* last commit of every block is signed by the validators of the predecessor
VerifiedBlockInv(chain) ==
\A h \in 2..MAX_HEIGHT:
PossibleCommit(chain[h - 1], chain[h].lastCommit)
==================================================================================

+ 822
- 0
rust-spec/fastsync/verification/fastsync_apalache.tla View File

@ -0,0 +1,822 @@
----------------------------- MODULE fastsync_apalache -----------------------------
(*
In this document we give the high level specification of the fast sync
protocol as implemented here:
https://github.com/tendermint/tendermint/tree/master/blockchain/v2.
We assume a system in which one node is trying to sync with the blockchain
(replicated state machine) by downloading blocks from the set of full nodes
(we call them peers) that are block providers, and executing transactions
(part of the block) against the application.
Peers can be faulty, and we don't make any assumption about the ratio of correct to faulty
nodes in the node's peer set (i.e., they can all be faulty). Correct peers are part
of the replicated state machine, i.e., they manage blockchain and execute
transactions against the same deterministic application. We don't make any
assumptions about the behavior of faulty processes. Processes (client and peers)
communicate by message passing.
In this specification, we model this system with two parties:
- the node (state machine) that is doing fastsync and
- the environment with which node interacts.
The environment consists of the set of (correct and faulty) peers with
which node interacts as part of fast sync protocol, but also contains some
aspects (adding/removing peers, timeout mechanism) that are part of the node
local environment (could be seen as part of the runtime in which node
executes).
As part of the fast sync protocol a node and the peers exchange the following messages:
- StatusRequest
- StatusResponse
- BlockRequest
- BlockResponse
- NoBlockResponse.
A node periodically issues StatusRequests to query peers for their current height (to decide which
blocks to request from which peers). Based on the StatusResponses sent by peers, the node requests
blocks for some height(s) by sending BlockRequest messages to peers. A peer provides a requested block via a
BlockResponse message. If a peer does not want to provide a requested block, it sends a NoBlockResponse
message. (One possible exchange is sketched right after this comment.)
In addition to those messages, a node in this spec receives additional input messages (events):
- AddPeer
- RemovePeer
- SyncTimeout.
These are the control messages that are provided to the node by its execution environment. AddPeer
covers the case when a connection with a peer is established; similarly, RemovePeer covers the case
when a connection with a peer is terminated. Finally, SyncTimeout is used to model a timeout trigger.
We assume that the fast sync protocol starts once connections with some number of peers
have been established. Therefore, the peer set is initialised with a non-empty set of peer ids.
Note, however, that the node does not initially know the peer heights.
*)
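\* For illustration only (not part of the original spec): one possible exchange.
\* A node at height 2 is connected to a correct peer p whose height is 4. The node
\* broadcasts a StatusRequest; p replies with StatusResponse(height = 4); the node
\* then sends BlockRequest(p, 2) and, in a later step, BlockRequest(p, 3); p replies
\* with the corresponding BlockResponse messages; the node verifies block 2 against
\* the last commit carried by block 3 and advances to height 3.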
EXTENDS Integers, FiniteSets, Sequences
CONSTANTS MAX_HEIGHT, \* the maximal height of blockchain
VALIDATOR_SETS, \* abstract set of validators
NIL_VS, \* a nil validator set
CORRECT, \* set of correct peers
FAULTY, \* set of faulty peers
TARGET_PENDING, \* maximum number of pending requests + downloaded blocks that are not yet processed
PEER_MAX_REQUESTS \* maximum number of pending requests per peer
ASSUME CORRECT \intersect FAULTY = {}
ASSUME TARGET_PENDING > 0
ASSUME PEER_MAX_REQUESTS > 0
\* the blockchain, see Tinychain
VARIABLE chain
\* introduce tiny chain as the source of blocks for the correct nodes
INSTANCE Tinychain
\* a special value for an undefined height
NilHeight == 0
\* the height of the genesis block
TrustedHeight == 1
\* the set of all peer ids the node can receive a message from
AllPeerIds == CORRECT \union FAULTY
\* A correct last commit has enough voting power, i.e., +2/3 of the voting power of
\* the corresponding validator set (enoughVotingPower = TRUE) that signs blockId.
\* BlockId identifies the correct previous block (in the implementation it is the hash of the block).
\* Instead of blockId, we encode blockIdEqRef, which is TRUE if the block id is equal
\* to the hash of the previous block, see Tinychain.
CorrectLastCommit(h) == chain[h].lastCommit
NilCommit == [blockIdEqRef |-> FALSE, committers |-> NIL_VS]
\* correct node always supplies the blocks from the blockchain
CorrectBlock(h) == chain[h]
NilBlock ==
[height |-> 0, hashEqRef |-> FALSE, wellFormed |-> FALSE,
lastCommit |-> NilCommit, VS |-> NIL_VS, NextVS |-> NIL_VS]
\* a special value for an undefined peer
NilPeer == "Nil" \* STRING for apalache efficiency
\* control the state of the syncing node
States == { "running", "finished"}
NoMsg == [type |-> "None"]
\* the variables of the node running fastsync
VARIABLES
state, \* running or finished
(*
blockPool [
height, \* current height we are trying to sync. Last block executed is height - 1
peerIds, \* set of peers node is connected to
peerHeights, \* map of peer ids to its (stated) height
blockStore, \* map of heights to (received) blocks
receivedBlocks, \* map of heights to peer that has sent us the block (stored in blockStore)
pendingBlocks, \* map of heights to peer to which block request has been sent
syncHeight, \* height at the point syncTimeout was triggered last time
syncedBlocks \* number of blocks synced since last syncTimeout. If it is 0 when the next timeout occurs, then the protocol terminates.
]
*)
blockPool
\* the variables of the peers providing blocks
VARIABLES
(*
peersState [
peerHeights, \* track peer heights
statusRequested, \* boolean set to true when StatusRequest is received. Models periodic sending of StatusRequests.
blocksRequested \* set of BlockRequests received that are not answered yet
]
*)
peersState
\* the variables for the network and scheduler
VARIABLES
turn, \* who is taking the turn: "Peers" or "Node"
inMsg, \* a node receives message by this variable
outMsg \* a node sends a message by this variable
(* the variables of the node *)
nvars == <<state, blockPool>>
(*************** Type definitions for Apalache (model checker) **********************)
AsIntSet(S) == S <: {Int}
\* type of process ids
PIDT == STRING
AsPidSet(S) == S <: {PIDT}
\* ControlMessage type
CMT == [type |-> STRING, peerId |-> PIDT] \* type of control messages
\* InMsg type
IMT == [type |-> STRING, peerId |-> PIDT, height |-> Int, block |-> BT]
AsInMsg(m) == m <: IMT
AsInMsgSet(S) == S <: {IMT}
\* OutMsg type
OMT == [type |-> STRING, peerId |-> PIDT, height |-> Int]
AsOutMsg(m) == m <: OMT
AsOutMsgSet(S) == S <: {OMT}
\* block pool type
BPT == [height |-> Int, peerIds |-> {PIDT}, peerHeights |-> [PIDT -> Int],
blockStore |-> [Int -> BT], receivedBlocks |-> [Int -> PIDT],
pendingBlocks |-> [Int -> PIDT], syncedBlocks |-> Int, syncHeight |-> Int]
AsBlockPool(bp) == bp <: BPT
(******************** Sets of messages ********************************)
\* Control messages
ControlMsgs ==
AsInMsgSet([type: {"addPeer"}, peerId: AllPeerIds])
\union
AsInMsgSet([type: {"removePeer"}, peerId: AllPeerIds])
\union
AsInMsgSet([type: {"syncTimeout"}])
\* All messages (and events) received by a node
InMsgs ==
AsInMsgSet({NoMsg})
\union
AsInMsgSet([type: {"blockResponse"}, peerId: AllPeerIds, block: Blocks])
\union
AsInMsgSet([type: {"noBlockResponse"}, peerId: AllPeerIds, height: Heights])
\union
AsInMsgSet([type: {"statusResponse"}, peerId: AllPeerIds, height: Heights])
\union
ControlMsgs
\* Messages sent by a node and received by peers (environment in our case)
OutMsgs ==
AsOutMsgSet({NoMsg})
\union
AsOutMsgSet([type: {"statusRequest"}]) \* StatusRequest is broadcast to the set of connected peers.
\union
AsOutMsgSet([type: {"blockRequest"}, peerId: AllPeerIds, height: Heights])
(********************************** NODE ***********************************)
InitNode ==
\E pIds \in SUBSET AllPeerIds: \* set of peers node established initial connections with
/\ pIds \subseteq CORRECT
/\ pIds /= AsPidSet({}) \* apalache checks non-emptiness more efficiently than subtracting from SUBSET
/\ blockPool = AsBlockPool([
height |-> TrustedHeight + 1, \* the genesis block is at height 1
syncHeight |-> TrustedHeight + 1, \* and we are synchronized to it
peerIds |-> pIds,
peerHeights |-> [p \in AllPeerIds |-> NilHeight], \* no peer height is known
blockStore |->
[h \in Heights |->
IF h > TrustedHeight THEN NilBlock ELSE chain[1]],
receivedBlocks |-> [h \in Heights |-> NilPeer],
pendingBlocks |-> [h \in Heights |-> NilPeer],
syncedBlocks |-> -1
])
/\ state = "running"
\* Remove faulty peers.
\* Returns new block pool.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L222
RemovePeers(rmPeers, bPool) ==
LET keepPeers == bPool.peerIds \ rmPeers IN
LET pHeights ==
[p \in AllPeerIds |-> IF p \in rmPeers THEN NilHeight ELSE bPool.peerHeights[p]] IN
LET failedRequests ==
{h \in Heights: /\ h >= bPool.height
/\ \/ bPool.pendingBlocks[h] \in rmPeers
\/ bPool.receivedBlocks[h] \in rmPeers} IN
LET pBlocks ==
[h \in Heights |-> IF h \in failedRequests THEN NilPeer ELSE bPool.pendingBlocks[h]] IN
LET rBlocks ==
[h \in Heights |-> IF h \in failedRequests THEN NilPeer ELSE bPool.receivedBlocks[h]] IN
LET bStore ==
[h \in Heights |-> IF h \in failedRequests THEN NilBlock ELSE bPool.blockStore[h]] IN
IF keepPeers /= bPool.peerIds
THEN [bPool EXCEPT
!.peerIds = keepPeers,
!.peerHeights = pHeights,
!.pendingBlocks = pBlocks,
!.receivedBlocks = rBlocks,
!.blockStore = bStore
]
ELSE bPool
\* Add a peer.
\* see https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L198
AddPeer(peer, bPool) ==
[bPool EXCEPT !.peerIds = bPool.peerIds \union {peer}]
(*
Handle StatusResponse message.
If valid status response, update peerHeights.
If invalid (the height is smaller than the current one), then remove the peer.
Returns new block pool.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L667
*)
HandleStatusResponse(msg, bPool) ==
LET peerHeight == bPool.peerHeights[msg.peerId] IN
IF /\ msg.peerId \in bPool.peerIds
/\ msg.height >= peerHeight
THEN \* a correct response
LET pHeights == [bPool.peerHeights EXCEPT ![msg.peerId] = msg.height] IN
[bPool EXCEPT !.peerHeights = pHeights]
ELSE RemovePeers({msg.peerId}, bPool) \* the peer has sent us a message with a smaller height, or the peer is not in our peer list
(*
Handle BlockResponse message.
If valid block response, update blockStore, pendingBlocks and receivedBlocks.
If invalid (unsolicited response or malformed block), then remove peer.
Returns new block pool.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L522
*)
HandleBlockResponse(msg, bPool) ==
LET h == msg.block.height IN
IF /\ msg.peerId \in bPool.peerIds
/\ bPool.blockStore[h] = NilBlock
/\ bPool.pendingBlocks[h] = msg.peerId
/\ msg.block.wellFormed
THEN
[bPool EXCEPT
!.blockStore = [bPool.blockStore EXCEPT ![h] = msg.block],
!.receivedBlocks = [bPool.receivedBlocks EXCEPT![h] = msg.peerId],
!.pendingBlocks = [bPool.pendingBlocks EXCEPT![h] = NilPeer]
]
ELSE RemovePeers({msg.peerId}, bPool)
HandleNoBlockResponse(msg, bPool) ==
RemovePeers({msg.peerId}, bPool)
\* Compute max peer height.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L440
MaxPeerHeight(bPool) ==
IF bPool.peerIds = AsPidSet({})
THEN 0 \* no peers, just return 0
ELSE LET Hts == {bPool.peerHeights[p] : p \in bPool.peerIds} IN
CHOOSE max \in Hts: \A h \in Hts: h <= max
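\* For illustration only (not part of the original spec): if bPool.peerIds = {"c1", "f2"}
\* and bPool.peerHeights maps "c1" to 3 and "f2" to 5, then MaxPeerHeight(bPool) = 5;
\* with an empty peer set the operator returns 0.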
(* Returns the next height for which a request should be sent.
Returns NilHeight in case there is no height for which a request can be sent.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L454 *)
FindNextRequestHeight(bPool) ==
LET S == {i \in Heights:
/\ i >= bPool.height
/\ i <= MaxPeerHeight(bPool)
/\ bPool.blockStore[i] = NilBlock
/\ bPool.pendingBlocks[i] = NilPeer} IN
IF S = AsIntSet({})
THEN NilHeight
ELSE
CHOOSE min \in S: \A h \in S: h >= min
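\* For illustration only (not part of the original spec): if bPool.height = 2,
\* MaxPeerHeight(bPool) = 4, height 2 is already pending from some peer, and heights
\* 3 and 4 are neither stored nor pending, then FindNextRequestHeight(bPool) = 3;
\* if every height in 2..4 is stored or pending, the operator returns NilHeight.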
\* Returns number of pending requests for a given peer.
NumOfPendingRequests(bPool, peer) ==
LET peerPendingRequests ==
{h \in Heights:
/\ h >= bPool.height
/\ bPool.pendingBlocks[h] = peer
}
IN
Cardinality(peerPendingRequests)
(* Returns a peer that can serve a block for a given height.
Returns NilPeer in case there is no such peer.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L477 *)
FindPeerToServe(bPool, h) ==
LET peersThatCanServe == { p \in bPool.peerIds:
/\ bPool.peerHeights[p] >= h
/\ NumOfPendingRequests(bPool, p) < PEER_MAX_REQUESTS } IN
LET pendingBlocks ==
{i \in Heights:
/\ i >= bPool.height
/\ \/ bPool.pendingBlocks[i] /= NilPeer
\/ bPool.blockStore[i] /= NilBlock
} IN
IF \/ peersThatCanServe = AsPidSet({})
\/ Cardinality(pendingBlocks) >= TARGET_PENDING
THEN NilPeer
\* pick a peer that can serve request for height h that has minimum number of pending requests
ELSE CHOOSE p \in peersThatCanServe: \A q \in peersThatCanServe:
/\ NumOfPendingRequests(bPool, p) <= NumOfPendingRequests(bPool, q)
\* Make a request for a block (if possible) and return a request message and block pool.
CreateRequest(bPool) ==
LET nextHeight == FindNextRequestHeight(bPool) IN
IF nextHeight = NilHeight THEN [msg |-> AsOutMsg(NoMsg), pool |-> bPool]
ELSE
LET peer == FindPeerToServe(bPool, nextHeight) IN
IF peer = NilPeer THEN [msg |-> AsOutMsg(NoMsg), pool |-> bPool]
ELSE
LET m == [type |-> "blockRequest", peerId |-> peer, height |-> nextHeight] IN
LET newPool == [bPool EXCEPT
!.pendingBlocks = [bPool.pendingBlocks EXCEPT ![nextHeight] = peer]
] IN
[msg |-> m, pool |-> newPool]
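\* For illustration only (not part of the original spec): if FindNextRequestHeight
\* yields 3 and FindPeerToServe returns "c1", then CreateRequest produces
\* msg = [type |-> "blockRequest", peerId |-> "c1", height |-> 3] and a pool in which
\* pendingBlocks[3] = "c1"; if no height or no peer is available, it returns NoMsg
\* together with the unchanged pool.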
\* Returns node state, i.e., defines termination condition.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L432
ComputeNextState(bPool) ==
IF bPool.syncedBlocks = 0 \* corresponds to the syncTimeout in case no progress has been made for a period of time.
THEN "finished"
ELSE IF /\ bPool.height > 1
/\ bPool.height >= MaxPeerHeight(bPool) \* see https://github.com/tendermint/tendermint/blob/61057a8b0af2beadee106e47c4616b279e83c920/blockchain/v2/scheduler.go#L566
THEN "finished"
ELSE "running"
(* Verify if commit is for the given block id and if commit has enough voting power.
See https://github.com/tendermint/tendermint/blob/61057a8b0af2beadee106e47c4616b279e83c920/blockchain/v2/processor_context.go#L12 *)
VerifyCommit(block, lastCommit) ==
PossibleCommit(block, lastCommit)
(* Tries to execute the next block in the pool, i.e., defines the block validation logic.
Returns the new block pool (peers that have sent invalid blocks are removed).
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/processor.go#L135 *)
ExecuteBlocks(bPool) ==
LET bStore == bPool.blockStore IN
LET block0 == bStore[bPool.height - 1] IN
\* blockPool is initialized with height = TrustedHeight + 1,
\* so bStore[bPool.height - 1] is well defined
LET block1 == bStore[bPool.height] IN
LET block2 == bStore[bPool.height + 1] IN
IF block1 = NilBlock \/ block2 = NilBlock
THEN bPool \* we don't have two next consecutive blocks
ELSE IF ~IsMatchingValidators(block1, block0.NextVS)
\* Check that block1.VS = block0.NextVS.
\* Otherwise, CorrectBlocksInv fails.
\* In the implementation NextVS is part of the application state,
\* so a mismatch can be found without access to block0.NextVS.
THEN \* the block does not have the expected validator set
RemovePeers({bPool.receivedBlocks[bPool.height]}, bPool)
ELSE IF ~VerifyCommit(block1, block2.lastCommit)
\* Verify commit of block2 based on block1.
\* Interestingly, we do not have to call IsMatchingValidators.
THEN \* remove the peers of block1 and block2, as they are considered faulty
RemovePeers({bPool.receivedBlocks[bPool.height],
bPool.receivedBlocks[bPool.height + 1]},
bPool)
ELSE \* all good, execute block at position height
[bPool EXCEPT !.height = bPool.height + 1]
\* Defines logic for pruning peers.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L613
TryPrunePeer(bPool, suspectedSet, isTimedOut) ==
(* -----------------------------------------------------------------------------------------------------------------------*)
(* Corresponds to the function prunablePeers in the scheduler.go file. Note that this function only checks whether *)
(* a block has been received from a peer during the peerTimeout period. *)
(* Note that if no request has been scheduled to a correct peer, or a request has been scheduled only recently *)
(* (so the peer has not responded yet), the peer will still be removed, as no block is received within peerTimeout. *)
(* In case of faulty peers, we don't have any guarantee that they will respond. *)
(* Therefore, we model this with nondeterministic behavior as it could lead to peer removal, for both correct and faulty. *)
(* See scheduler.go *)
(* https://github.com/tendermint/tendermint/blob/4298bbcc4e25be78e3c4f21979d6aa01aede6e87/blockchain/v2/scheduler.go#L335 *)
LET toRemovePeers == bPool.peerIds \intersect suspectedSet IN
(*
Corresponds to the logic for pruning a peer that is responsible for delivering the block for the next height.
The pruning logic for the next height is based on the time when a BlockRequest is sent. Therefore, if a request is sent
to a correct peer for the next height (blockPool.height), the peer should never be removed by this check, as we assume that
correct peers respond timely and reliably. However, if a request is sent to a faulty peer, then we
might or might not get a response on time, which is modelled with the nondeterministic isTimedOut flag.
See scheduler.go
https://github.com/tendermint/tendermint/blob/4298bbcc4e25be78e3c4f21979d6aa01aede6e87/blockchain/v2/scheduler.go#L617
*)
LET nextHeightPeer == bPool.pendingBlocks[bPool.height] IN
LET prunablePeers ==
IF /\ nextHeightPeer /= NilPeer
/\ nextHeightPeer \in FAULTY
/\ isTimedOut
THEN toRemovePeers \union {nextHeightPeer}
ELSE toRemovePeers
IN
RemovePeers(prunablePeers, bPool)
\* Handle SyncTimeout. It models whether progress has been made (the height has increased) since the last SyncTimeout event.
HandleSyncTimeout(bPool) ==
[bPool EXCEPT
!.syncedBlocks = bPool.height - bPool.syncHeight,
!.syncHeight = bPool.height
]
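\* For illustration only (not part of the original spec): if the previous syncTimeout
\* left syncHeight = 5 and the node has meanwhile advanced to height 7, then
\* HandleSyncTimeout sets syncedBlocks to 2 and syncHeight to 7; if the next timeout
\* finds the height still at 7, syncedBlocks becomes 0 and ComputeNextState
\* returns "finished".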
HandleResponse(msg, bPool) ==
IF msg.type = "blockResponse" THEN
HandleBlockResponse(msg, bPool)
ELSE IF msg.type = "noBlockResponse" THEN
HandleNoBlockResponse(msg, bPool)
ELSE IF msg.type = "statusResponse" THEN
HandleStatusResponse(msg, bPool)
ELSE IF msg.type = "addPeer" THEN
AddPeer(msg.peerId, bPool)
ELSE IF msg.type = "removePeer" THEN
RemovePeers({msg.peerId}, bPool)
ELSE IF msg.type = "syncTimeout" THEN
HandleSyncTimeout(bPool)
ELSE
bPool
(*
At every node step we execute the following steps (atomically):
1) input message is consumed and the corresponding handler is called,
2) pruning logic is called
3) block execution is triggered (we try to execute block at next height)
4) a request to a peer is made (if possible) and
5) we decide whether the termination condition is satisfied, and if so we stop.
*)
NodeStep ==
\E suspectedSet \in SUBSET AllPeerIds: \* suspectedSet is a nondeterministic set of peers
\E isTimedOut \in BOOLEAN:
LET bPool == HandleResponse(inMsg, blockPool) IN
LET bp == TryPrunePeer(bPool, suspectedSet, isTimedOut) IN
LET nbPool == ExecuteBlocks(bp) IN
LET msgAndPool == CreateRequest(nbPool) IN
LET nstate == ComputeNextState(msgAndPool.pool) IN
/\ state' = nstate
/\ blockPool' = msgAndPool.pool
/\ outMsg' = msgAndPool.msg
/\ inMsg' = AsInMsg(NoMsg)
\* If the node is running, then in every step we try to create a blockRequest.
\* In addition, the input message (if one exists) is consumed and processed.
NextNode ==
\/ /\ state = "running"
/\ NodeStep
\/ /\ state = "finished"
/\ UNCHANGED <<nvars, inMsg, outMsg>>
(********************************** Peers ***********************************)
InitPeers ==
\E pHeights \in [AllPeerIds -> Heights]:
peersState = [
peerHeights |-> pHeights,
statusRequested |-> FALSE,
blocksRequested |-> AsOutMsgSet({})
]
HandleStatusRequest(msg, pState) ==
[pState EXCEPT
!.statusRequested = TRUE
]
HandleBlockRequest(msg, pState) ==
[pState EXCEPT
!.blocksRequested = pState.blocksRequested \union AsOutMsgSet({msg})
]
HandleRequest(msg, pState) ==
IF msg = AsOutMsg(NoMsg)
THEN pState
ELSE IF msg.type = "statusRequest"
THEN HandleStatusRequest(msg, pState)
ELSE HandleBlockRequest(msg, pState)
CreateStatusResponse(peer, pState, anyHeight) ==
LET m ==
IF peer \in CORRECT
THEN AsInMsg([type |-> "statusResponse", peerId |-> peer, height |-> pState.peerHeights[peer]])
ELSE AsInMsg([type |-> "statusResponse", peerId |-> peer, height |-> anyHeight]) IN
[msg |-> m, peers |-> pState]
CreateBlockResponse(msg, pState, arbitraryBlock) ==
LET m ==
IF msg.peerId \in CORRECT
THEN AsInMsg([type |-> "blockResponse", peerId |-> msg.peerId, block |-> CorrectBlock(msg.height)])
ELSE AsInMsg([type |-> "blockResponse", peerId |-> msg.peerId, block |-> arbitraryBlock]) IN
LET npState ==
[pState EXCEPT
!.blocksRequested = pState.blocksRequested \ {msg}
] IN
[msg |-> m, peers |-> npState]
GrowPeerHeight(pState) ==
\E p \in CORRECT:
/\ pState.peerHeights[p] < MAX_HEIGHT
/\ peersState' = [pState EXCEPT !.peerHeights[p] = @ + 1]
/\ inMsg' = AsInMsg(NoMsg)
SendStatusResponseMessage(pState) ==
/\ \E arbitraryHeight \in Heights:
\E peer \in AllPeerIds:
LET msgAndPeers == CreateStatusResponse(peer, pState, arbitraryHeight) IN
/\ peersState' = msgAndPeers.peers
/\ inMsg' = msgAndPeers.msg
SendAddPeerMessage ==
\E peer \in AllPeerIds:
inMsg' = AsInMsg([type |-> "addPeer", peerId |-> peer])
SendRemovePeerMessage ==
\E peer \in AllPeerIds:
inMsg' = AsInMsg([type |-> "removePeer", peerId |-> peer])
SendSyncTimeoutMessage ==
inMsg' = AsInMsg([type |-> "syncTimeout"])
SendControlMessage ==
\/ SendAddPeerMessage
\/ SendRemovePeerMessage
\/ SendSyncTimeoutMessage
\* An extremely important property of block hashes (blockId):
\* If a commit is correct, then the previous block in the block store must also be correct.
UnforgeableBlockId(height, block) ==
block.hashEqRef => block = chain[height]
\* A faulty peer cannot forge enough of the validators' signatures.
\* A more precise rule would check that the committers have over 2/3 of the VS's voting power.
NoFork(height, block) ==
(height > 1 /\ block.lastCommit.committers = chain[height - 1].VS)
=> block.lastCommit.blockIdEqRef
\* Can this block be produced by a faulty peer?
IsBlockByFaulty(height, block) ==
/\ block.height = height
/\ UnforgeableBlockId(height, block)
/\ NoFork(height, block)
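\* For illustration only (not part of the original spec): a faulty peer may respond
\* with a block whose hashEqRef = FALSE at any height (UnforgeableBlockId then holds
\* trivially), but, for h > 1, it cannot produce a block at height h whose
\* lastCommit.committers equals chain[h - 1].VS unless lastCommit.blockIdEqRef is TRUE,
\* and it cannot set hashEqRef = TRUE unless the block is exactly chain[h].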
SendBlockResponseMessage(pState) ==
\* a response to a requested block: either by a correct, or by a faulty peer
\/ /\ pState.blocksRequested /= AsOutMsgSet({})
/\ \E msg \in pState.blocksRequested:
\E block \in Blocks:
/\ IsBlockByFaulty(msg.height, block)
/\ LET msgAndPeers == CreateBlockResponse(msg, pState, block) IN
/\ peersState' = msgAndPeers.peers
/\ inMsg' = msgAndPeers.msg
\* a faulty peer can always send an unsolicited block
\/ \E peerId \in FAULTY:
\E block \in Blocks:
/\ IsBlockByFaulty(block.height, block)
/\ peersState' = pState
/\ inMsg' = AsInMsg([type |-> "blockResponse",
peerId |-> peerId, block |-> block])
SendNoBlockResponseMessage(pState) ==
/\ peersState' = pState
/\ inMsg' \in AsInMsgSet([type: {"noBlockResponse"}, peerId: FAULTY, height: Heights])
SendResponseMessage(pState) ==
\/ SendBlockResponseMessage(pState)
\/ SendNoBlockResponseMessage(pState)
\/ SendStatusResponseMessage(pState)
NextEnvStep(pState) ==
\/ SendResponseMessage(pState)
\/ GrowPeerHeight(pState)
\/ SendControlMessage /\ peersState' = pState
\* note that we propagate pState, which was missing in the previous version
\* The peers consume a message and update their local state. They then make a single step, i.e., send at most a single message.
\* The message sent can be either a response to a request or a faulty message (sent by a faulty process).
NextPeers ==
LET pState == HandleRequest(outMsg, peersState) IN
/\ outMsg' = AsOutMsg(NoMsg)
/\ NextEnvStep(pState)
\* the composition of the node, the peers, the network and scheduler
Init ==
/\ IsCorrectChain(chain) \* initialize the blockchain
/\ InitNode
/\ InitPeers
/\ turn = "Peers"
/\ inMsg = AsInMsg(NoMsg)
/\ outMsg = AsOutMsg([type |-> "statusRequest"])
Next ==
IF turn = "Peers"
THEN
/\ NextPeers
/\ turn' = "Node"
/\ UNCHANGED <<nvars, chain>>
ELSE
/\ NextNode
/\ turn' = "Peers"
/\ UNCHANGED <<peersState, chain>>
FlipTurn ==
turn' =
IF turn = "Peers" THEN
"Node"
ELSE
"Peers"
\* Compute max peer height. Used as a helper operator in properties.
MaxCorrectPeerHeight(bPool) ==
LET correctPeers == {p \in bPool.peerIds: p \in CORRECT} IN
IF correctPeers = AsPidSet({})
THEN 0 \* no peers, just return 0
ELSE LET Hts == {bPool.peerHeights[p] : p \in correctPeers} IN
CHOOSE max \in Hts: \A h \in Hts: h <= max
\* properties to check
TypeOK ==
/\ state \in States
/\ inMsg \in InMsgs
/\ outMsg \in OutMsgs
/\ turn \in {"Peers", "Node"}
/\ peersState \in [
peerHeights: [AllPeerIds -> Heights \union {NilHeight}],
statusRequested: BOOLEAN,
blocksRequested:
SUBSET
[type: {"blockRequest"}, peerId: AllPeerIds, height: Heights]
]
/\ blockPool \in [
height: Heights,
peerIds: SUBSET AllPeerIds,
peerHeights: [AllPeerIds -> Heights \union {NilHeight}],
blockStore: [Heights -> Blocks \union {NilBlock}],
receivedBlocks: [Heights -> AllPeerIds \union {NilPeer}],
pendingBlocks: [Heights -> AllPeerIds \union {NilPeer}],
syncedBlocks: Heights \union {NilHeight, -1},
syncHeight: Heights
]
(* Incorrect synchronization: the last block may never be received *)
Sync1 ==
[](state = "finished" =>
blockPool.height >= MaxCorrectPeerHeight(blockPool))
Sync1AsInv ==
state = "finished" => blockPool.height >= MaxCorrectPeerHeight(blockPool)
(* Incorrect synchronization, as there may be a timeout *)
Sync2 ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ [] (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
Sync2AsInv ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
(* Correct synchronization *)
Sync3 ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ blockPool.syncedBlocks <= 0 \* timeout
\/ [] (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
Sync3AsInv ==
\A p \in CORRECT:
\/ p \notin blockPool.peerIds
\/ blockPool.syncedBlocks <= 0 \* timeout
\/ (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
(* Naive termination *)
\* This property is violated, as the faulty peers may produce infinitely many responses
Termination ==
WF_turn(FlipTurn) => <>(state = "finished")
(* Termination by timeout: the protocol terminates, if there is a timeout *)
\* the precondition: fair flip turn and eventual timeout when no new blocks were synchronized
TerminationByTOPre ==
/\ WF_turn(FlipTurn)
/\ <>(inMsg.type = "syncTimeout" /\ blockPool.height <= blockPool.syncHeight)
TerminationByTO ==
TerminationByTOPre => <>(state = "finished")
(* The termination property when we only have correct peers *)
\* as correct peers may spam the node with addPeer, removePeer, and statusResponse,
\* we have to enforce eventual response (there are no queues in our spec)
CorrBlockResponse ==
\A h \in Heights:
[](outMsg.type = "blockRequest" /\ outMsg.height = h
=> <>(inMsg.type = "blockResponse" /\ inMsg.block.height = h))
\* a precondition for termination in presence of only correct processes
TerminationCorrPre ==
/\ FAULTY = AsPidSet({})
/\ WF_turn(FlipTurn)
/\ CorrBlockResponse
\* termination when there are only correct processes
TerminationCorr ==
TerminationCorrPre => <>(state = "finished")
\* All synchronized blocks (but the last one) are exactly like in the reference chain
CorrectBlocksInv ==
\/ state /= "finished"
\/ \A h \in 1..(blockPool.height - 1):
blockPool.blockStore[h] = chain[h]
\* A false expectation that the protocol only finishes with the blocks
\* from the processes that had not been suspected of being faulty
SyncFromCorrectInv ==
\/ state /= "finished"
\/ \A h \in 1..blockPool.height:
blockPool.receivedBlocks[h] \in blockPool.peerIds \union {NilPeer}
\* A false expectation that a correct process is never removed from the set of peer ids.
\* A correct process may reply too late and then get evicted.
CorrectNeverSuspectedInv ==
CORRECT \subseteq blockPool.peerIds
BlockPoolInvariant ==
\A h \in Heights:
\* waiting for a block to arrive
\/ /\ blockPool.receivedBlocks[h] = NilPeer
/\ blockPool.blockStore[h] = NilBlock
\* valid block is received and is present in the store
\/ /\ blockPool.receivedBlocks[h] /= NilPeer
/\ blockPool.blockStore[h] /= NilBlock
/\ blockPool.pendingBlocks[h] = NilPeer
(* a few simple properties that trigger counterexamples *)
\* Shows execution in which peer set is empty
PeerSetIsNeverEmpty == blockPool.peerIds /= AsPidSet({})
\* Shows execution in which state = "finished" and MaxPeerHeight is not equal to 1
StateNotFinished ==
state /= "finished" \/ MaxPeerHeight(blockPool) = 1
=============================================================================
\*=============================================================================
\* Modification History
\* Last modified Fri May 29 20:41:53 CEST 2020 by igor
\* Last modified Thu Apr 16 16:57:22 CEST 2020 by zarkomilosevic
\* Created Tue Feb 04 10:36:18 CET 2020 by zarkomilosevic
