----------------------------- MODULE fastsync -----------------------------
(*
 In this document we give the high-level specification of the fast sync
 protocol as implemented here:
 https://github.com/tendermint/tendermint/tree/master/blockchain/v2.

 We assume a system in which one node is trying to sync with the blockchain
 (replicated state machine) by downloading blocks from the set of full nodes
 (we call them peers) that are block providers, and executing transactions
 (part of the block) against the application.

 Peers can be faulty, and we don't make any assumptions about the ratio of
 correct to faulty nodes in the node's peerset (i.e., they can all be faulty).
 Correct peers are part of the replicated state machine, i.e., they manage the
 blockchain and execute transactions against the same deterministic application.
 We don't make any assumptions about the behavior of faulty processes.
 Processes (client and peers) communicate by message passing.

 In this specification, we model this system with two parties:
    - the node (state machine) that is doing fastsync and
    - the environment with which the node interacts.

 The environment consists of the set of (correct and faulty) peers with
 which the node interacts as part of the fast sync protocol, but it also contains
 some aspects (adding/removing peers, timeout mechanism) that are part of the node's
 local environment (it could be seen as part of the runtime in which the node
 executes).

 As part of the fast sync protocol a node and the peers exchange the following messages:

 - StatusRequest
 - StatusResponse
 - BlockRequest
 - BlockResponse
 - NoBlockResponse.

 A node periodically issues StatusRequests to query peers for their current height (to decide what
 blocks to request from what peers). Based on StatusResponses (that are sent by peers), the node requests
 blocks for some height(s) by sending BlockRequest messages to peers. A peer provides a requested block with a
 BlockResponse message. If a peer does not want to provide a requested block, then it sends a NoBlockResponse message.
 In addition to those messages, a node in this spec receives additional input messages (events):

 - AddPeer
 - RemovePeer
 - SyncTimeout.

 These are the control messages that are provided to the node by its execution environment. AddPeer
 is for the case when a connection is established with a peer; similarly, RemovePeer is for the case when
 a connection with a peer is terminated. Finally, SyncTimeout is used to model a timeout trigger.

 We assume that the fast sync protocol starts when connections with some number of peers
 are established. Therefore, the peer set is initialized with a non-empty set of peer ids. Note,
 however, that the node initially does not know the peer heights.
*)
EXTENDS Integers, FiniteSets, Sequences

CONSTANTS MAX_HEIGHT,        \* the maximal height of the blockchain
          VALIDATOR_SETS,    \* abstract set of validators
          NIL_VS,            \* a nil validator set
          CORRECT,           \* the set of correct peers
          FAULTY,            \* the set of faulty peers
          TARGET_PENDING,    \* maximum number of pending requests + downloaded blocks that are not yet processed
          PEER_MAX_REQUESTS  \* maximum number of pending requests per peer

ASSUME CORRECT \intersect FAULTY = {}
ASSUME TARGET_PENDING > 0
ASSUME PEER_MAX_REQUESTS > 0

\* the blockchain, see Tinychain
VARIABLE chain

\* introduce the tiny chain as the source of blocks for the correct nodes
INSTANCE Tinychain

\* a special value for an undefined height
NilHeight == 0

\* the height of the genesis block
TrustedHeight == 1

\* the set of all peer ids the node can receive a message from
AllPeerIds == CORRECT \union FAULTY

\* A correct last commit has enough voting power, i.e., +2/3 of the voting power of
\* the corresponding validator set signs blockId (enoughVotingPower = TRUE).
\* BlockId defines the correct previous block (in the implementation it is the hash of the block).
\* Instead of blockId, we encode blockIdEqRef, which is true if the block id is equal
\* to the hash of the previous block, see Tinychain.
CorrectLastCommit(h) == chain[h].lastCommit

NilCommit == [blockIdEqRef |-> FALSE, committers |-> NIL_VS]

\* a correct node always supplies the blocks from the blockchain
CorrectBlock(h) == chain[h]

NilBlock ==
    [height |-> 0, hashEqRef |-> FALSE, wellFormed |-> FALSE,
     lastCommit |-> NilCommit, VS |-> NIL_VS, NextVS |-> NIL_VS]

\* a special value for an undefined peer
NilPeer == "Nil" \* STRING for apalache efficiency

\* control the state of the syncing node
States == { "running", "finished" }

NoMsg == [type |-> "None"]
\* the variables of the node running fastsync
VARIABLES
  state,            \* running or finished
  (*
  blockPool [
    height,         \* the current height we are trying to sync; the last block executed is at height - 1
    peerIds,        \* the set of peers the node is connected to
    peerHeights,    \* a map of peer ids to their (stated) heights
    blockStore,     \* a map of heights to (received) blocks
    receivedBlocks, \* a map of heights to the peer that has sent us the block (stored in blockStore)
    pendingBlocks,  \* a map of heights to the peer to which a block request has been sent
    syncHeight,     \* the height at the point syncTimeout was triggered last time
    syncedBlocks    \* the number of blocks synced since the last syncTimeout. If it is 0 when the next timeout occurs, the protocol terminates.
  ]
  *)
  blockPool

\* the variables of the peers providing blocks
VARIABLES
  (*
  peersState [
    peerHeights,     \* track peer heights
    statusRequested, \* a boolean set to true when a StatusRequest is received. Models periodic sending of StatusRequests.
    blocksRequested  \* the set of received BlockRequests that are not answered yet
  ]
  *)
  peersState

\* the variables for the network and scheduler
VARIABLES
  turn,   \* who is taking the turn: "Peers" or "Node"
  inMsg,  \* a node receives messages via this variable
  outMsg  \* a node sends messages via this variable

(* the variables of the node *)
nvars == <<state, blockPool>>
(*************** Type definitions for Apalache (model checker) **********************)
AsIntSet(S) == S <: {Int}

\* the type of process ids
PIDT == STRING
AsPidSet(S) == S <: {PIDT}

\* ControlMessage type
CMT == [type |-> STRING, peerId |-> PIDT] \* the type of control messages

\* InMsg type
IMT == [type |-> STRING, peerId |-> PIDT, height |-> Int, block |-> BT]
AsInMsg(m) == m <: IMT
AsInMsgSet(S) == S <: {IMT}

\* OutMsg type
OMT == [type |-> STRING, peerId |-> PIDT, height |-> Int]
AsOutMsg(m) == m <: OMT
AsOutMsgSet(S) == S <: {OMT}

\* the type of the block pool
BPT == [height |-> Int, peerIds |-> {PIDT}, peerHeights |-> [PIDT -> Int],
        blockStore |-> [Int -> BT], receivedBlocks |-> [Int -> PIDT],
        pendingBlocks |-> [Int -> PIDT], syncedBlocks |-> Int, syncHeight |-> Int]
AsBlockPool(bp) == bp <: BPT
(******************** Sets of messages ********************************)
\* Control messages
ControlMsgs ==
    AsInMsgSet([type: {"addPeer"}, peerId: AllPeerIds])
        \union
    AsInMsgSet([type: {"removePeer"}, peerId: AllPeerIds])
        \union
    AsInMsgSet([type: {"syncTimeout"}])

\* All messages (and events) received by a node
InMsgs ==
    AsInMsgSet({NoMsg})
        \union
    AsInMsgSet([type: {"blockResponse"}, peerId: AllPeerIds, block: Blocks])
        \union
    AsInMsgSet([type: {"noBlockResponse"}, peerId: AllPeerIds, height: Heights])
        \union
    AsInMsgSet([type: {"statusResponse"}, peerId: AllPeerIds, height: Heights])
        \union
    ControlMsgs

\* Messages sent by a node and received by peers (the environment in our case)
OutMsgs ==
    AsOutMsgSet({NoMsg})
        \union
    AsOutMsgSet([type: {"statusRequest"}]) \* StatusRequest is broadcast to the set of connected peers.
        \union
    AsOutMsgSet([type: {"blockRequest"}, peerId: AllPeerIds, height: Heights])
(********************************** NODE ***********************************)

InitNode ==
    \E pIds \in SUBSET AllPeerIds:   \* the set of peers the node established initial connections with
        /\ pIds \subseteq CORRECT    \* this line is not necessary
        /\ pIds /= AsPidSet({})      \* apalache better checks non-emptiness than subtracts from SUBSET
        /\ blockPool = AsBlockPool([
                height |-> TrustedHeight + 1,     \* the genesis block is at height 1
                syncHeight |-> TrustedHeight + 1, \* and we are synchronized to it
                peerIds |-> pIds,
                peerHeights |-> [p \in AllPeerIds |-> NilHeight], \* no peer height is known
                blockStore |->
                    [h \in Heights |->
                        IF h > TrustedHeight THEN NilBlock ELSE chain[1]],
                receivedBlocks |-> [h \in Heights |-> NilPeer],
                pendingBlocks |-> [h \in Heights |-> NilPeer],
                syncedBlocks |-> -1
           ])
        /\ state = "running"
\* Remove faulty peers.
\* Returns the new block pool.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L222
RemovePeers(rmPeers, bPool) ==
    LET keepPeers == bPool.peerIds \ rmPeers IN
    LET pHeights ==
        [p \in AllPeerIds |-> IF p \in rmPeers THEN NilHeight ELSE bPool.peerHeights[p]] IN

    LET failedRequests ==
        {h \in Heights: /\ h >= bPool.height
                        /\ \/ bPool.pendingBlocks[h] \in rmPeers
                           \/ bPool.receivedBlocks[h] \in rmPeers} IN
    LET pBlocks ==
        [h \in Heights |-> IF h \in failedRequests THEN NilPeer ELSE bPool.pendingBlocks[h]] IN
    LET rBlocks ==
        [h \in Heights |-> IF h \in failedRequests THEN NilPeer ELSE bPool.receivedBlocks[h]] IN
    LET bStore ==
        [h \in Heights |-> IF h \in failedRequests THEN NilBlock ELSE bPool.blockStore[h]] IN

    IF keepPeers /= bPool.peerIds
    THEN [bPool EXCEPT
            !.peerIds = keepPeers,
            !.peerHeights = pHeights,
            !.pendingBlocks = pBlocks,
            !.receivedBlocks = rBlocks,
            !.blockStore = bStore
          ]
    ELSE bPool
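
(* Illustrative example (a comment, not part of the spec): assume Heights = 1..3
   and a pool with height = 2, peerIds = {"p1", "p2"}, pendingBlocks[2] = "p1"
   and receivedBlocks[3] = "p2". Then RemovePeers({"p1"}, pool) removes "p1"
   from peerIds, resets its entry in peerHeights to NilHeight, and clears the
   failed request at height 2 (pendingBlocks[2] becomes NilPeer), while the
   entries attributed to "p2" are kept untouched. *)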
\* Add a peer.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L198
AddPeer(peer, bPool) ==
    [bPool EXCEPT !.peerIds = bPool.peerIds \union {peer}]

(*
Handle a StatusResponse message.
If the status response is valid, update peerHeights.
If it is invalid (the height is smaller than the current one), then remove the peer.
Returns the new block pool.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L667
*)
HandleStatusResponse(msg, bPool) ==
    LET peerHeight == bPool.peerHeights[msg.peerId] IN
    IF /\ msg.peerId \in bPool.peerIds
       /\ msg.height >= peerHeight
    THEN    \* a correct response
        LET pHeights == [bPool.peerHeights EXCEPT ![msg.peerId] = msg.height] IN
        [bPool EXCEPT !.peerHeights = pHeights]
    ELSE RemovePeers({msg.peerId}, bPool) \* the peer has sent us a message with a smaller height, or it is not in our peer list
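
(* Illustrative example (a comment, not part of the spec): if peerHeights["p1"] = 5
   and the node receives [type |-> "statusResponse", peerId |-> "p1", height |-> 7],
   the recorded height of "p1" is updated to 7. A later status response from "p1"
   with height 6 < 7 fails the check msg.height >= peerHeight, so "p1" is removed. *)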
(*
Handle a BlockResponse message.
If the block response is valid, update blockStore, pendingBlocks and receivedBlocks.
If it is invalid (an unsolicited response or a malformed block), then remove the peer.
Returns the new block pool.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L522
*)
HandleBlockResponse(msg, bPool) ==
    LET h == msg.block.height IN

    IF /\ msg.peerId \in bPool.peerIds
       /\ bPool.blockStore[h] = NilBlock
       /\ bPool.pendingBlocks[h] = msg.peerId
       /\ msg.block.wellFormed
    THEN
        [bPool EXCEPT
            !.blockStore = [bPool.blockStore EXCEPT ![h] = msg.block],
            !.receivedBlocks = [bPool.receivedBlocks EXCEPT ![h] = msg.peerId],
            !.pendingBlocks = [bPool.pendingBlocks EXCEPT ![h] = NilPeer]
        ]
    ELSE RemovePeers({msg.peerId}, bPool)
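
(* Illustrative example (a comment, not part of the spec): a block response for
   height h is accepted only if the node has an outstanding request to exactly
   that peer, i.e., pendingBlocks[h] = msg.peerId, and no block is stored yet at h.
   For instance, if pendingBlocks[4] = "p2" and "p1" replies with a well-formed
   block of height 4, the response is unsolicited and "p1" is removed. *)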
HandleNoBlockResponse(msg, bPool) ==
    RemovePeers({msg.peerId}, bPool)

\* Compute the maximal peer height.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L440
MaxPeerHeight(bPool) ==
    IF bPool.peerIds = AsPidSet({})
    THEN 0 \* no peers, just return 0
    ELSE LET Hts == {bPool.peerHeights[p] : p \in bPool.peerIds} IN
         CHOOSE max \in Hts: \A h \in Hts: h <= max
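
(* Illustrative example (a comment, not part of the spec): with
   peerIds = {"p1", "p2"}, peerHeights["p1"] = 4 and peerHeights["p2"] = 7,
   MaxPeerHeight evaluates to 7; with an empty peer set it evaluates to 0. *)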
(* Returns the next height for which a request should be sent.
Returns NilHeight in case there is no height for which a request can be sent.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L454 *)
FindNextRequestHeight(bPool) ==
    LET S == {i \in Heights:
                /\ i >= bPool.height
                /\ i <= MaxPeerHeight(bPool)
                /\ bPool.blockStore[i] = NilBlock
                /\ bPool.pendingBlocks[i] = NilPeer} IN
    IF S = AsIntSet({})
    THEN NilHeight
    ELSE
        CHOOSE min \in S: \A h \in S: h >= min
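
(* Illustrative example (a comment, not part of the spec): if bPool.height = 3,
   MaxPeerHeight(bPool) = 6, the block for height 3 is already stored, and a
   request for height 4 is pending, then S = {5, 6} and the operator returns 5,
   the smallest height that is neither stored nor requested. *)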
\* Returns the number of pending requests for a given peer.
NumOfPendingRequests(bPool, peer) ==
    LET peerPendingRequests ==
        {h \in Heights:
            /\ h >= bPool.height
            /\ bPool.pendingBlocks[h] = peer
        }
    IN
    Cardinality(peerPendingRequests)
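
(* Illustrative example (a comment, not part of the spec): if bPool.height = 2
   and pendingBlocks maps heights 2 and 5 to "p1" and height 3 to "p2", then
   NumOfPendingRequests(bPool, "p1") = 2. Requests below bPool.height are not
   counted, as those blocks have already been executed. *)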
(* Returns a peer that can serve a block for a given height.
Returns NilPeer in case there is no such peer.
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L477 *)
FindPeerToServe(bPool, h) ==
    LET peersThatCanServe == { p \in bPool.peerIds:
                /\ bPool.peerHeights[p] >= h
                /\ NumOfPendingRequests(bPool, p) < PEER_MAX_REQUESTS } IN

    LET pendingBlocks ==
        {i \in Heights:
            /\ i >= bPool.height
            /\ \/ bPool.pendingBlocks[i] /= NilPeer
               \/ bPool.blockStore[i] /= NilBlock
        } IN

    IF \/ peersThatCanServe = AsPidSet({})
       \/ Cardinality(pendingBlocks) >= TARGET_PENDING
    THEN NilPeer

    \* pick a peer that can serve a request for height h and has the minimum number of pending requests
    ELSE CHOOSE p \in peersThatCanServe: \A q \in peersThatCanServe:
            NumOfPendingRequests(bPool, p) <= NumOfPendingRequests(bPool, q)
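
(* Illustrative example (a comment, not part of the spec): with
   PEER_MAX_REQUESTS = 2 and TARGET_PENDING = 3, a peer that already has two
   outstanding requests is excluded from peersThatCanServe, and NilPeer is
   returned once three heights at or above bPool.height are either pending
   or already downloaded but not yet executed. *)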
\* Make a request for a block (if possible) and return a request message and the new block pool.
CreateRequest(bPool) ==
    LET nextHeight == FindNextRequestHeight(bPool) IN
    IF nextHeight = NilHeight THEN [msg |-> AsOutMsg(NoMsg), pool |-> bPool]
    ELSE
      LET peer == FindPeerToServe(bPool, nextHeight) IN
      IF peer = NilPeer THEN [msg |-> AsOutMsg(NoMsg), pool |-> bPool]
      ELSE
        LET m == [type |-> "blockRequest", peerId |-> peer, height |-> nextHeight] IN
        LET newPool == [bPool EXCEPT
                          !.pendingBlocks = [bPool.pendingBlocks EXCEPT ![nextHeight] = peer]
                       ] IN
        [msg |-> m, pool |-> newPool]
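
(* Illustrative example (a comment, not part of the spec): CreateRequest
   composes the two operators above. If FindNextRequestHeight returns 5 and
   FindPeerToServe picks "p2", the result is the message
   [type |-> "blockRequest", peerId |-> "p2", height |-> 5] together with a
   pool in which pendingBlocks[5] = "p2", so height 5 is not requested twice. *)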
\* Returns the node state, i.e., defines the termination condition.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L432
ComputeNextState(bPool) ==
    IF bPool.syncedBlocks = 0 \* corresponds to the syncTimeout, in case no progress has been made for a period of time.
    THEN "finished"
    ELSE IF /\ bPool.height > 1
            /\ bPool.height >= MaxPeerHeight(bPool) \* see https://github.com/tendermint/tendermint/blob/61057a8b0af2beadee106e47c4616b279e83c920/blockchain/v2/scheduler.go#L566
         THEN "finished"
         ELSE "running"
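
(* Illustrative example (a comment, not part of the spec): the node terminates
   either for lack of progress (syncedBlocks = 0, i.e., no block was executed
   between two consecutive syncTimeout events) or for lack of work, e.g., with
   bPool.height = 5 and MaxPeerHeight(bPool) = 5 the state becomes "finished". *)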
(* Verify that the commit is for the given block id and that the commit has enough voting power.
See https://github.com/tendermint/tendermint/blob/61057a8b0af2beadee106e47c4616b279e83c920/blockchain/v2/processor_context.go#L12 *)
VerifyCommit(block, lastCommit) ==
    PossibleCommit(block, lastCommit)

(* Tries to execute the next block in the pool, i.e., defines the block validation logic.
Returns the new block pool (peers that have sent invalid blocks are removed).
See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/processor.go#L135 *)
ExecuteBlocks(bPool) ==
    LET bStore == bPool.blockStore IN
    LET block0 == bStore[bPool.height - 1] IN
      \* blockPool is initialized with height = TrustedHeight + 1,
      \* so bStore[bPool.height - 1] is well defined
    LET block1 == bStore[bPool.height] IN
    LET block2 == bStore[bPool.height + 1] IN

    IF block1 = NilBlock \/ block2 = NilBlock
    THEN bPool \* we don't have the two next consecutive blocks

    ELSE IF ~IsMatchingValidators(block1, block0.NextVS)
        \* Check that block1.VS = block0.NextVS.
        \* Otherwise, CorrectBlocksInv fails.
        \* In the implementation, NextVS is part of the application state,
        \* so a mismatch can be found without access to block0.NextVS.
    THEN \* the block does not have the expected validator set
        RemovePeers({bPool.receivedBlocks[bPool.height]}, bPool)
    ELSE IF ~VerifyCommit(block1, block2.lastCommit)
        \* Verify the commit of block2 based on block1.
        \* Interestingly, we do not have to call IsMatchingValidators.
    THEN \* remove the peers of block1 and block2, as they are considered faulty
        RemovePeers({bPool.receivedBlocks[bPool.height],
                     bPool.receivedBlocks[bPool.height + 1]},
                    bPool)
    ELSE \* all good, execute the block at position height
        [bPool EXCEPT !.height = bPool.height + 1]
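
(* Illustrative example (a comment, not part of the spec): to execute the block
   at height h, the node needs a verification window of three blocks: block0 at
   h - 1 (already executed), block1 at h, and block2 at h + 1. block1 is checked
   against the validator set announced by block0, and block2.lastCommit supplies
   the signatures over block1. Only then is the pool height advanced to h + 1. *)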
\* Defines the logic for pruning peers.
\* See https://github.com/tendermint/tendermint/blob/dac030d6daf4d3e066d84275911128856838af4e/blockchain/v2/scheduler.go#L613
TryPrunePeer(bPool, suspectedSet, isTimedOut) ==
    (* ---------------------------------------------------------------------------------------------------------------------- *)
    (* Corresponds to the function prunablePeers in the scheduler.go file. Note that this function only checks whether a block *)
    (* has been received from a peer during the peerTimeout period.                                                            *)
    (* Note that if no request has been scheduled to a correct peer, or a request has been scheduled only recently (so the     *)
    (* peer has not responded yet), the peer may still be removed, as no block is received within peerTimeout.                 *)
    (* In the case of faulty peers, we don't have any guarantee that they will respond.                                        *)
    (* Therefore, we model this with nondeterministic behavior, as it could lead to peer removal, for both correct and faulty  *)
    (* peers. See scheduler.go                                                                                                 *)
    (* https://github.com/tendermint/tendermint/blob/4298bbcc4e25be78e3c4f21979d6aa01aede6e87/blockchain/v2/scheduler.go#L335  *)
    LET toRemovePeers == bPool.peerIds \intersect suspectedSet IN

    (*
      Corresponds to the logic for pruning the peer that is responsible for delivering the block for the next height.
      The pruning logic for the next height is based on the time when a BlockRequest is sent. Therefore, if a request is sent
      to a correct peer for the next height (blockPool.height), it should never be removed by this check, as we assume that
      correct peers respond timely and reliably. However, if a request is sent to a faulty peer, then we
      might or might not get a response on time, which is modeled with the nondeterministic isTimedOut flag.
      See scheduler.go
      https://github.com/tendermint/tendermint/blob/4298bbcc4e25be78e3c4f21979d6aa01aede6e87/blockchain/v2/scheduler.go#L617
    *)
    LET nextHeightPeer == bPool.pendingBlocks[bPool.height] IN
    LET prunablePeers ==
        IF /\ nextHeightPeer /= NilPeer
           /\ nextHeightPeer \in FAULTY
           /\ isTimedOut
        THEN toRemovePeers \union {nextHeightPeer}
        ELSE toRemovePeers
    IN
    RemovePeers(prunablePeers, bPool)
\* Handle a SyncTimeout. It records whether progress has been made (the height has increased) since the last SyncTimeout event.
HandleSyncTimeout(bPool) ==
    [bPool EXCEPT
        !.syncedBlocks = bPool.height - bPool.syncHeight,
        !.syncHeight = bPool.height
    ]
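
(* Illustrative example (a comment, not part of the spec): if the previous
   timeout fired at syncHeight = 4 and the pool has meanwhile advanced to
   height = 6, then syncedBlocks becomes 2 and syncHeight becomes 6. If the
   node is still at height 4, syncedBlocks becomes 0, and ComputeNextState
   reports "finished" in the same node step. *)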
HandleResponse(msg, bPool) ==
    IF msg.type = "blockResponse" THEN
        HandleBlockResponse(msg, bPool)
    ELSE IF msg.type = "noBlockResponse" THEN
        HandleNoBlockResponse(msg, bPool)
    ELSE IF msg.type = "statusResponse" THEN
        HandleStatusResponse(msg, bPool)
    ELSE IF msg.type = "addPeer" THEN
        AddPeer(msg.peerId, bPool)
    ELSE IF msg.type = "removePeer" THEN
        RemovePeers({msg.peerId}, bPool)
    ELSE IF msg.type = "syncTimeout" THEN
        HandleSyncTimeout(bPool)
    ELSE
        bPool
(*
   At every node step the following actions are executed (atomically):
     1) the input message is consumed and the corresponding handler is called,
     2) the pruning logic is called,
     3) block execution is triggered (we try to execute the block at the next height),
     4) a request to a peer is made (if possible), and
     5) we decide if the termination condition is satisfied, so that we stop.
*)
NodeStep ==
    \E suspectedSet \in SUBSET AllPeerIds:   \* suspectedSet is a nondeterministic set of peers
      \E isTimedOut \in BOOLEAN:
        LET bPool == HandleResponse(inMsg, blockPool) IN
        LET bp == TryPrunePeer(bPool, suspectedSet, isTimedOut) IN
        LET nbPool == ExecuteBlocks(bp) IN
        LET msgAndPool == CreateRequest(nbPool) IN
        LET nstate == ComputeNextState(msgAndPool.pool) IN

        /\ state' = nstate
        /\ blockPool' = msgAndPool.pool
        /\ outMsg' = msgAndPool.msg
        /\ inMsg' = AsInMsg(NoMsg)

\* If the node is running, then in every step we try to create a blockRequest.
\* In addition, the input message (if it exists) is consumed and processed.
NextNode ==
    \/ /\ state = "running"
       /\ NodeStep

    \/ /\ state = "finished"
       /\ UNCHANGED <<nvars, inMsg, outMsg>>
(********************************** Peers ***********************************)

InitPeers ==
    \E pHeights \in [AllPeerIds -> Heights]:
        peersState = [
            peerHeights |-> pHeights,
            statusRequested |-> FALSE,
            blocksRequested |-> AsOutMsgSet({})
        ]

HandleStatusRequest(msg, pState) ==
    [pState EXCEPT
        !.statusRequested = TRUE
    ]

HandleBlockRequest(msg, pState) ==
    [pState EXCEPT
        !.blocksRequested = pState.blocksRequested \union AsOutMsgSet({msg})
    ]

HandleRequest(msg, pState) ==
    IF msg = AsOutMsg(NoMsg)
    THEN pState
    ELSE IF msg.type = "statusRequest"
         THEN HandleStatusRequest(msg, pState)
         ELSE HandleBlockRequest(msg, pState)
CreateStatusResponse(peer, pState, anyHeight) ==
    LET m ==
        IF peer \in CORRECT
        THEN AsInMsg([type |-> "statusResponse", peerId |-> peer, height |-> pState.peerHeights[peer]])
        ELSE AsInMsg([type |-> "statusResponse", peerId |-> peer, height |-> anyHeight]) IN
    [msg |-> m, peers |-> pState]

CreateBlockResponse(msg, pState, arbitraryBlock) ==
    LET m ==
        IF msg.peerId \in CORRECT
        THEN AsInMsg([type |-> "blockResponse", peerId |-> msg.peerId, block |-> CorrectBlock(msg.height)])
        ELSE AsInMsg([type |-> "blockResponse", peerId |-> msg.peerId, block |-> arbitraryBlock]) IN
    LET npState ==
        [pState EXCEPT
            !.blocksRequested = pState.blocksRequested \ {msg}
        ] IN
    [msg |-> m, peers |-> npState]

GrowPeerHeight(pState) ==
    \E p \in CORRECT:
        /\ pState.peerHeights[p] < MAX_HEIGHT
        /\ peersState' = [pState EXCEPT !.peerHeights[p] = @ + 1]
        /\ inMsg' = AsInMsg(NoMsg)

SendStatusResponseMessage(pState) ==
    /\ \E arbitraryHeight \in Heights:
        \E peer \in AllPeerIds:
            LET msgAndPeers == CreateStatusResponse(peer, pState, arbitraryHeight) IN
            /\ peersState' = msgAndPeers.peers
            /\ inMsg' = msgAndPeers.msg

SendAddPeerMessage ==
    \E peer \in AllPeerIds:
        inMsg' = AsInMsg([type |-> "addPeer", peerId |-> peer])

SendRemovePeerMessage ==
    \E peer \in AllPeerIds:
        inMsg' = AsInMsg([type |-> "removePeer", peerId |-> peer])

SendSyncTimeoutMessage ==
    inMsg' = AsInMsg([type |-> "syncTimeout"])

SendControlMessage ==
    \/ SendAddPeerMessage
    \/ SendRemovePeerMessage
    \/ SendSyncTimeoutMessage
\* An extremely important property of block hashes (blockId):
\* if the block hash coincides with the hash of the reference block,
\* then the blocks should be equal.
UnforgeableBlockId(height, block) ==
    block.hashEqRef => block = chain[height]

\* A faulty peer cannot forge enough of the validators' signatures.
\* In other words: if a commit contains enough signatures from the validators (in reality 2/3, in the model all),
\* then the blockId points to the block on the chain, encoded as block.lastCommit.blockIdEqRef being true.
\* A more precise rule would have checked that the committers have over 2/3 of the VS's voting power.
NoFork(height, block) ==
    (height > 1 /\ block.lastCommit.committers = chain[height - 1].VS)
        => block.lastCommit.blockIdEqRef

\* Can a block be produced by a faulty peer, assuming it cannot generate forks (the basic assumption of the protocol)?
IsBlockByFaulty(height, block) ==
    /\ block.height = height
    /\ UnforgeableBlockId(height, block)
    /\ NoFork(height, block)
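
(* Illustrative example (a comment, not part of the spec): a faulty peer may
   send a block whose last commit carries an arbitrary committers set, but by
   NoFork it cannot produce a commit signed by the full reference validator set
   chain[height - 1].VS that points to a block other than the reference one.
   Likewise, by UnforgeableBlockId, any block whose hash matches the reference
   hash must be the reference block itself. *)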
SendBlockResponseMessage(pState) ==
    \* a response to a requested block: either by a correct, or by a faulty peer
    \/ /\ pState.blocksRequested /= AsOutMsgSet({})
       /\ \E msg \in pState.blocksRequested:
            \E block \in Blocks:
                /\ IsBlockByFaulty(msg.height, block)
                /\ LET msgAndPeers == CreateBlockResponse(msg, pState, block) IN
                   /\ peersState' = msgAndPeers.peers
                   /\ inMsg' = msgAndPeers.msg

    \* a faulty peer can always send an unsolicited block
    \/ \E peerId \in FAULTY:
        \E block \in Blocks:
            /\ IsBlockByFaulty(block.height, block)
            /\ peersState' = pState
            /\ inMsg' = AsInMsg([type |-> "blockResponse",
                                 peerId |-> peerId, block |-> block])

SendNoBlockResponseMessage(pState) ==
    /\ peersState' = pState
    /\ inMsg' \in AsInMsgSet([type: {"noBlockResponse"}, peerId: FAULTY, height: Heights])

SendResponseMessage(pState) ==
    \/ SendBlockResponseMessage(pState)
    \/ SendNoBlockResponseMessage(pState)
    \/ SendStatusResponseMessage(pState)

NextEnvStep(pState) ==
    \/ SendResponseMessage(pState)
    \/ GrowPeerHeight(pState)
    \/ SendControlMessage /\ peersState' = pState
       \* note that we propagate pState, which was missing in the previous version

\* The peers consume a message and update their local state. They then make a single step, i.e., they send at most a single message.
\* The message sent can be either a response to a request or a faulty message (sent by a faulty process).
NextPeers ==
    LET pState == HandleRequest(outMsg, peersState) IN

    /\ outMsg' = AsOutMsg(NoMsg)
    /\ NextEnvStep(pState)
\* the composition of the node, the peers, the network and the scheduler
Init ==
    /\ IsCorrectChain(chain) \* initialize the blockchain
    /\ InitNode
    /\ InitPeers
    /\ turn = "Peers"
    /\ inMsg = AsInMsg(NoMsg)
    /\ outMsg = AsOutMsg([type |-> "statusRequest"])

Next ==
    IF turn = "Peers"
    THEN
        /\ NextPeers
        /\ turn' = "Node"
        /\ UNCHANGED <<nvars, chain>>
    ELSE
        /\ NextNode
        /\ turn' = "Peers"
        /\ UNCHANGED <<peersState, chain>>

FlipTurn ==
    turn' =
        IF turn = "Peers" THEN
            "Node"
        ELSE
            "Peers"
\* Compute the maximal height of correct peers. Used as a helper operator in properties.
MaxCorrectPeerHeight(bPool) ==
    LET correctPeers == {p \in bPool.peerIds: p \in CORRECT} IN
    IF correctPeers = AsPidSet({})
    THEN 0 \* no correct peers, just return 0
    ELSE LET Hts == {bPool.peerHeights[p] : p \in correctPeers} IN
         CHOOSE max \in Hts: \A h \in Hts: h <= max

\* properties to check
TypeOK ==
    /\ state \in States
    /\ inMsg \in InMsgs
    /\ outMsg \in OutMsgs
    /\ turn \in {"Peers", "Node"}
    /\ peersState \in [
         peerHeights: [AllPeerIds -> Heights \union {NilHeight}],
         statusRequested: BOOLEAN,
         blocksRequested:
            SUBSET
            [type: {"blockRequest"}, peerId: AllPeerIds, height: Heights]
       ]
    /\ blockPool \in [
         height: Heights,
         peerIds: SUBSET AllPeerIds,
         peerHeights: [AllPeerIds -> Heights \union {NilHeight}],
         blockStore: [Heights -> Blocks \union {NilBlock}],
         receivedBlocks: [Heights -> AllPeerIds \union {NilPeer}],
         pendingBlocks: [Heights -> AllPeerIds \union {NilPeer}],
         syncedBlocks: Heights \union {NilHeight, -1},
         syncHeight: Heights
       ]
(* Incorrect synchronization: the last block may never be received *)
Sync1 ==
    [](state = "finished" =>
        blockPool.height >= MaxCorrectPeerHeight(blockPool))

Sync1AsInv ==
    state = "finished" => blockPool.height >= MaxCorrectPeerHeight(blockPool)

(* Incorrect synchronization, as there may be a timeout *)
Sync2 ==
    \A p \in CORRECT:
        \/ p \notin blockPool.peerIds
        \/ [] (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)

Sync2AsInv ==
    \A p \in CORRECT:
        \/ p \notin blockPool.peerIds
        \/ (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)

(* Correct synchronization *)
Sync3 ==
    \A p \in CORRECT:
        \/ p \notin blockPool.peerIds
        \/ blockPool.syncedBlocks <= 0 \* timeout
        \/ [] (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)

Sync3AsInv ==
    \A p \in CORRECT:
        \/ p \notin blockPool.peerIds
        \/ blockPool.syncedBlocks <= 0 \* timeout
        \/ (state = "finished" => blockPool.height >= blockPool.peerHeights[p] - 1)
(* Naive termination *)
\* This property is violated, as the faulty peers may produce infinitely many responses
Termination ==
    WF_turn(FlipTurn) => <>(state = "finished")

(* Termination by timeout: the protocol terminates if there is a timeout *)
\* the precondition: a fair flip of the turn and an eventual timeout when no new blocks were synchronized
TerminationByTOPre ==
    /\ WF_turn(FlipTurn)
    /\ <>(inMsg.type = "syncTimeout" /\ blockPool.height <= blockPool.syncHeight)

TerminationByTO ==
    TerminationByTOPre => <>(state = "finished")

(* The termination property when we only have correct peers *)
\* as correct peers may spam the node with addPeer, removePeer, and statusResponse,
\* we have to enforce eventual response (there are no queues in our spec)
CorrBlockResponse ==
    \A h \in Heights:
        [](outMsg.type = "blockRequest" /\ outMsg.height = h
            => <>(inMsg.type = "blockResponse" /\ inMsg.block.height = h))

\* a precondition for termination in the presence of only correct processes
TerminationCorrPre ==
    /\ FAULTY = AsPidSet({})
    /\ WF_turn(FlipTurn)
    /\ CorrBlockResponse

\* termination when there are only correct processes
TerminationCorr ==
    TerminationCorrPre => <>(state = "finished")
\* All synchronized blocks (but the last one) are exactly like in the reference chain
CorrectBlocksInv ==
    \/ state /= "finished"
    \/ \A h \in 1..(blockPool.height - 1):
        blockPool.blockStore[h] = chain[h]

\* A false expectation that the protocol only finishes with the blocks
\* from the processes that have not been suspected of being faulty
SyncFromCorrectInv ==
    \/ state /= "finished"
    \/ \A h \in 1..blockPool.height:
        blockPool.receivedBlocks[h] \in blockPool.peerIds \union {NilPeer}

\* A false expectation that a correct process is never removed from the set of peer ids.
\* A correct process may reply too late and then get evicted.
CorrectNeverSuspectedInv ==
    CORRECT \subseteq blockPool.peerIds

BlockPoolInvariant ==
    \A h \in Heights:
      \* waiting for a block to arrive
      \/ /\ blockPool.receivedBlocks[h] = NilPeer
         /\ blockPool.blockStore[h] = NilBlock
      \* a valid block is received and is present in the store
      \/ /\ blockPool.receivedBlocks[h] /= NilPeer
         /\ blockPool.blockStore[h] /= NilBlock
         /\ blockPool.pendingBlocks[h] = NilPeer

(* a few simple properties that trigger counterexamples *)

\* Shows an execution in which the peer set is empty
PeerSetIsNeverEmpty == blockPool.peerIds /= AsPidSet({})

\* Shows an execution in which state = "finished" and MaxPeerHeight is not equal to 1
StateNotFinished ==
    state /= "finished" \/ MaxPeerHeight(blockPool) = 1
=============================================================================
\*=============================================================================
\* Modification History
\* Last modified Fri May 29 20:41:53 CEST 2020 by igor
\* Last modified Thu Apr 16 16:57:22 CEST 2020 by zarkomilosevic
\* Created Tue Feb 04 10:36:18 CET 2020 by zarkomilosevic