
Current versions of light client specs from tendermint-rs (#158)

* current versions of light client specs from tendermint-rs

* markdown lint

* linting

* links

* links

* links

Co-authored-by: Marko Baricevic <marbar3778@yahoo.com>
Josef Widder
commit b74b1c2b68
25 changed files with 3891 additions and 0 deletions
  1. .markdownlint.yml (+1 -0)
  2. rust-spec/lightclient/README.md (+92 -0)
  3. rust-spec/lightclient/attacks/evidence-handling.md (+219 -0)
  4. rust-spec/lightclient/detection/README.md (+69 -0)
  5. rust-spec/lightclient/detection/detection.md (+696 -0)
  6. rust-spec/lightclient/detection/discussions.md (+178 -0)
  7. rust-spec/lightclient/detection/draft-functions.md (+289 -0)
  8. rust-spec/lightclient/detection/req-ibc-detection.md (+345 -0)
  9. rust-spec/lightclient/experiments.png (BIN)
  10. rust-spec/lightclient/verification/001bmc-apalache.csv (+49 -0)
  11. rust-spec/lightclient/verification/Blockchain_A_1.tla (+171 -0)
  12. rust-spec/lightclient/verification/Lightclient_A_1.tla (+440 -0)
  13. rust-spec/lightclient/verification/MC4_3_correct.tla (+15 -0)
  14. rust-spec/lightclient/verification/MC4_3_faulty.tla (+15 -0)
  15. rust-spec/lightclient/verification/MC4_4_faulty.tla (+15 -0)
  16. rust-spec/lightclient/verification/MC4_5_correct.tla (+15 -0)
  17. rust-spec/lightclient/verification/MC4_5_faulty.tla (+15 -0)
  18. rust-spec/lightclient/verification/MC4_6_faulty.tla (+15 -0)
  19. rust-spec/lightclient/verification/MC4_7_faulty.tla (+15 -0)
  20. rust-spec/lightclient/verification/MC5_5_correct.tla (+15 -0)
  21. rust-spec/lightclient/verification/MC5_5_faulty.tla (+15 -0)
  22. rust-spec/lightclient/verification/MC5_7_faulty.tla (+15 -0)
  23. rust-spec/lightclient/verification/MC7_5_faulty.tla (+15 -0)
  24. rust-spec/lightclient/verification/MC7_7_faulty.tla (+15 -0)
  25. rust-spec/lightclient/verification/verification.md (+1162 -0)

.markdownlint.yml (+1 -0)

@@ -1,4 +1,5 @@
default: true
MD001: false
MD007: { indent: 4 }
MD013: false
MD024: { siblings_only: true }


rust-spec/lightclient/README.md (+92 -0)

@@ -0,0 +1,92 @@
# Light Client Specification
This directory contains work-in-progress English and TLA+ specifications for the Light Client
protocol. Implementations of the light client can be found in
[Rust](https://github.com/informalsystems/tendermint-rs/tree/master/light-client) and
[Go](https://github.com/tendermint/tendermint/tree/master/light).
Light clients are assumed to be initialized once from a trusted source
with a trusted header and validator set. The light client
protocol allows a client to then securely update its trusted state by requesting and
verifying a minimal set of data from a network of full nodes (at least one of which is correct).
The light client is decomposed into three components:
- Commit Verification - verify signed headers and associated validator set changes from a single full node
- Fork Detection - verify commits across multiple full nodes and detect conflicts (i.e., the existence of forks)
- Fork Accountability - given a fork, determine which validators are responsible for it.
## Commit Verification
The [English specification](verification/verification.md) describes the light client
commit verification problem in terms of the temporal properties
[LCV-DIST-SAFE.1](https://github.com/informalsystems/tendermint-rs/blob/master/docs/spec/lightclient/verification/verification.md#lcv-dist-safe1) and
[LCV-DIST-LIVE.1](https://github.com/informalsystems/tendermint-rs/blob/master/docs/spec/lightclient/verification/verification.md#lcv-dist-live1).
Commit verification is assumed to operate within the Tendermint Failure Model, where +2/3 of validators are correct for some time period and
validator sets can change arbitrarily at each height.
A light client protocol is also provided, including all checks that
need to be performed on headers, commits, and validator sets
to satisfy the temporal properties - so a light client can continuously
synchronize with a blockchain. Clients can skip possibly
many intermediate headers by exploiting overlap in trusted and untrusted validator sets.
When there is not enough overlap, a bisection routine can be used to find a
minimal set of headers that do provide the required overlap.
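To make the skipping idea concrete, here is a minimal sketch of such a bisection routine, assuming hypothetical helpers `FetchLightBlock` (download a light block from a peer) and `VerifySingle` (one skipping-verification step that checks validator-set overlap); it is an illustration only, not the protocol defined by the specifications above.
```go
// Sketch only: recursive bisection between a trusted block and a target height.
// FetchLightBlock and VerifySingle are assumed helpers, not part of the spec.
func Bisection(peer PeerID, trusted LightBlock, targetHeight int64) (LightBlock, error) {
    untrusted := FetchLightBlock(peer, targetHeight)
    if VerifySingle(trusted, untrusted) == OK {
        // enough overlap between trusted and untrusted validator sets
        return untrusted, nil
    }
    if trusted.Header.Height+1 == targetHeight {
        // adjacent heights use sequential verification; failure here is final
        return LightBlock{}, errors.New("verification failed")
    }
    // not enough overlap: first verify up to the midpoint, then continue
    pivot, err := Bisection(peer, trusted, (trusted.Header.Height+targetHeight)/2)
    if err != nil {
        return LightBlock{}, err
    }
    return Bisection(peer, pivot, targetHeight)
}
```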
The [TLA+ specification](verification/Lightclient_A_1.tla) is a formal description of the
commit verification protocol executed by a client, including the safety and
liveness properties, which can be model checked with Apalache.
The `MC*.tla` files contain concrete parameters for the
[TLA+ specification](verification/Lightclient_A_1.tla), in order to do model checking.
For instance, [MC4_3_faulty.tla](verification/MC4_3_faulty.tla) contains the following parameters
for the nodes, heights, the trusting period, and correctness of the primary node:
```tla
AllNodes == {"n1", "n2", "n3", "n4"}
TRUSTED_HEIGHT == 1
TARGET_HEIGHT == 3
TRUSTING_PERIOD == 1400 \* two weeks, one day is 100 time units :-)
IS_PRIMARY_CORRECT == FALSE
```
To run a complete set of experiments, clone [apalache](https://github.com/informalsystems/apalache) and [apalache-tests](https://github.com/informalsystems/apalache-tests) into a directory `$DIR` and run the following commands:
```sh
$DIR/apalache-tests/scripts/mk-run.py --memlimit 28 001bmc-apalache.csv $DIR/apalache . out
./out/run-all.sh
```
After the experiments have finished, you can collect the logs by executing the following command:
```sh
cd ./out
$DIR/apalache-tests/scripts/parse-logs.py --human .
```
The following table summarizes the experimental results. The TLA+ properties can be found in the
[TLA+ specification](verification/Lightclient_A_1.tla).
The experiments were run on an AWS instance equipped with 32GB of
RAM and a 4-core Intel® Xeon® E5-2686 v4 CPU @ 2.30GHz.
We write “✗=k” when a bug is reported at depth k, and “✓<=k” when
no bug is reported up to depth k.
![Experimental results](experiments.png)
## Fork Detection
This is a work-in-progress draft.
The [English specification](detection/detection.md) defines blockchain forks and describes
the problem of a light client detecting them from communication with a network
of full nodes, where at least one is correct.
There is no TLA+ yet.
## Fork Accountability
There is no English specification yet. TODO: Jovan's work?
TODO: there is a WIP [TLA+
specification](https://github.com/informalsystems/verification/pull/13) in the
verification repo that should be moved over here.

rust-spec/lightclient/attacks/evidence-handling.md (+219 -0)

@@ -0,0 +1,219 @@
# Light client attacks
We define a light client attack as the detection of conflicting headers for a given height that can be verified
starting from the trusted light block. A light client attack is defined in the context of the interactions of a
light client with two peers. One of the peers (called the primary) defines a trace of verified light blocks
(the primary trace) that is checked against the trace of the other peer (called the witness), which we call the
witness trace.
A light client attack is given by primary and witness traces
that have a common root (the same trusted light block for a common height) but form
conflicting branches (the traces end at the same height but with different headers).
Note that conflicting branches can be arbitrarily large, as branches continue to diverge after
a bifurcation point. We propose an approach that allows us to define a valid light client attack
with only a common light block and a single conflicting light block. We rely on the assumption
that the primary is under suspicion (therefore not trusted) and that the witness plays a
supporting role in detecting and processing an attack (therefore trusted). Hence, once a light client
detects an attack, it needs to send the witness only the missing data (common height
and conflicting light block), since the witness already has its own trace. Keeping light client attack data at constant size
saves bandwidth and reduces the attack surface. As we will explain below, although in the context of
light client core
[verification](https://github.com/informalsystems/tendermint-rs/tree/master/docs/spec/lightclient/verification)
the roles of primary and witness are clearly defined,
in case of an attack we run the same attack detection procedure twice with the roles swapped.
The rationale is that the light client does not know which peer is correct (on the right main branch),
so it tries to create and submit attack evidence to both peers.
Light client attack evidence consists of a conflicting light block and a common height.
```go
type LightClientAttackEvidence struct {
    ConflictingBlock LightBlock
    CommonHeight     int64
}
```
A full node can validate light client attack evidence by executing the following procedure:
```go
func IsValid(lcaEvidence LightClientAttackEvidence, bc Blockchain) boolean {
    commonBlock = GetLightBlock(bc, lcaEvidence.CommonHeight)
    if commonBlock == nil { return false }
    // Note that trustingPeriod in ValidAndVerified is set to UNBONDING_PERIOD
    verdict = ValidAndVerified(commonBlock, lcaEvidence.ConflictingBlock)
    conflictingHeight = lcaEvidence.ConflictingBlock.Header.Height
    return verdict == OK and bc[conflictingHeight].Header != lcaEvidence.ConflictingBlock.Header
}
```
## Light client attack creation
Given a trusted light block `trusted`, a light node executes the bisection algorithm to verify header
`untrusted` at some height `h`. If the bisection algorithm succeeds, then the header `untrusted` is verified.
Headers downloaded as part of the bisection algorithm are stored in a store and are also in
the verified state. Therefore, after the bisection algorithm successfully terminates, we have a trace of
the light blocks (`[]LightBlock`) obtained from the primary, which we call the primary trace.
### Primary trace
The following invariant holds for the primary trace (a checking sketch follows after the list):
- Given a `trusted` light block, target height `h`, and `primary_trace` ([] LightBlock):
*primary_trace[0] == trusted* and *primary_trace[len(primary_trace)-1].Height == h* and
successive light blocks pass the light client verification logic.
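The invariant can be read as a predicate over the trace. The following is a minimal checking sketch, assuming the `ValidAndVerified` function and `OK` verdict from the verification spec; it is illustrative only.
```go
// Sketch: checks the primary-trace invariant stated above.
// ValidAndVerified and OK are assumed from the verification spec.
func IsValidTrace(trace []LightBlock, trusted LightBlock, targetHeight int64) bool {
    if len(trace) == 0 ||
        trace[0].Header != trusted.Header ||
        trace[len(trace)-1].Header.Height != targetHeight {
        return false
    }
    for i := 0; i+1 < len(trace); i++ {
        // successive light blocks must pass light client verification
        if ValidAndVerified(trace[i], trace[i+1]) != OK {
            return false
        }
    }
    return true
}
```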
### Witness with a conflicting header
The verified header at height `h` is cross-checked with every witness as part of
[detection](https://github.com/informalsystems/tendermint-rs/tree/master/docs/spec/lightclient/detection).
If a witness returns a conflicting header at height `h`, the following procedure is executed to verify
whether the conflicting header comes from a valid trace and, if so, to create attack evidence:
#### Helper functions
We assume the following helper functions:
```go
// Returns trace of verified light blocks starting from rootHeight and ending with targetHeight.
Trace(lightStore LightStore, rootHeight int64, targetHeight int64) LightBlock[]
// Returns the validator set for the given height
GetValidators(bc Blockchain, height int64) Validator[]
// Returns validator addresses for the given validators
GetAddresses(vals Validator[]) ValidatorAddress[]
```
```go
func DetectLightClientAttacks(primary PeerID,
    primary_trace []LightBlock,
    witness PeerID) (LightClientAttackEvidence, LightClientAttackEvidence) {

    primary_lca_evidence, witness_trace = DetectLightClientAttack(primary_trace, witness)

    witness_lca_evidence = nil
    if witness_trace != nil {
        witness_lca_evidence, _ = DetectLightClientAttack(witness_trace, primary)
    }
    return primary_lca_evidence, witness_lca_evidence
}

func DetectLightClientAttack(trace []LightBlock, peer PeerID) (LightClientAttackEvidence, []LightBlock) {

    lightStore = new LightStore().Update(trace[0], StateTrusted)

    for i in 1..len(trace)-1 {
        lightStore, result = VerifyToTarget(peer, lightStore, trace[i].Header.Height)
        if result == ResultFailure { return (nil, nil) }

        current = lightStore.Get(trace[i].Header.Height)
        // if the obtained header is the same as in the trace, we continue with the next height
        if current.Header == trace[i].Header { continue }

        // we have identified a conflicting header
        commonBlock = trace[i-1]
        conflictingBlock = trace[i]

        return (LightClientAttackEvidence { conflictingBlock, commonBlock.Header.Height },
            Trace(lightStore, trace[i-1].Header.Height, trace[i].Header.Height))
    }
    return (nil, nil)
}
```
## Evidence handling
As part of on-chain evidence handling, full nodes identify misbehaving processes and inform
the application, so they can be slashed. Note that only bonded validators should
be reported to the application. There are three types of attacks that can be executed against
the Tendermint light client:
- lunatic attack
- equivocation attack and
- amnesia attack.
We now specify the evidence handling logic.
```go
func detectMisbehavingProcesses(lcAttackEvidence LightClientAttackEvidence, bc Blockchain) []ValidatorAddress {
    assume IsValid(lcAttackEvidence, bc)

    conflictingBlock = lcAttackEvidence.ConflictingBlock
    // header from the main chain at the height of the conflicting block
    trustedHeader = bc[conflictingBlock.Header.Height].Header

    // lunatic light client attack
    if !isValidBlock(trustedHeader, conflictingBlock.Header) {
        conflictingCommit = conflictingBlock.Commit
        bondedValidators = GetNextValidators(bc, lcAttackEvidence.CommonHeight)
        return getSigners(conflictingCommit) intersection GetAddresses(bondedValidators)

    // equivocation light client attack
    } else if trustedHeader.Round == conflictingBlock.Header.Round {
        conflictingCommit = conflictingBlock.Commit
        trustedCommit = bc[conflictingBlock.Header.Height+1].LastCommit
        return getSigners(trustedCommit) intersection getSigners(conflictingCommit)

    // amnesia light client attack
    } else {
        HandleAmnesiaAttackEvidence(lcAttackEvidence, bc)
    }
}
// Block validity in this context is defined by the trusted header.
func isValidBlock(trusted Header, conflicting Header) boolean {
    return trusted.ValidatorsHash == conflicting.ValidatorsHash and
        trusted.NextValidatorsHash == conflicting.NextValidatorsHash and
        trusted.ConsensusHash == conflicting.ConsensusHash and
        trusted.AppHash == conflicting.AppHash and
        trusted.LastResultsHash == conflicting.LastResultsHash
}

func getSigners(commit Commit) []ValidatorAddress {
    signers = []ValidatorAddress
    for (i, commitSig) in commit.Signatures {
        if commitSig.BlockIDFlag == BlockIDFlagCommit {
            signers.append(commitSig.ValidatorAddress)
        }
    }
    return signers
}
```
Note that handling amnesia attack evidence involves more complex processing, i.e., misbehaving
processes cannot be determined from the amnesia attack evidence alone. We explain in the following section a protocol
for handling amnesia attack evidence.
### Amnesia attack evidence handling
Detecting faulty processes in case of an amnesia attack is more complex and cannot be inferred
purely from the attack evidence data. In this case, in order to detect misbehaving processes we need
access to the votes processes sent and received during the conflicting height. Therefore, amnesia handling assumes that
validators persist all votes received and sent during multi-round heights (an amnesia attack
is only possible at heights that execute over multiple rounds, i.e., where the commit round > 0).
To simplify the description of the algorithm, we assume the existence of a trusted oracle called the monitor that
drives the algorithm and outputs faulty processes at the end. The monitor can be implemented in a
distributed setting as an on-chain module. The algorithm works as follows (a duplicate-vote checking sketch follows the list):
1) The monitor sends a voteset request to the validators of the conflicting height. Validators
are expected to send their votesets within a predefined timeout.
2) Upon receiving the voteset request, validators send their votesets to the monitor.
3) Validators that have not sent their votesets within the timeout are considered faulty.
4) The received votesets are preprocessed: each valid vote sent by process p is added to the voteset of the sender p. This phase ensures that
votes sent by faulty processes and observed by at least one correct validator cannot be excluded from the analysis.
5) The voteset of every validator is analyzed independently to decide whether the validator is correct or faulty.
A validator is considered faulty if at least one of the following invalid transitions is found:
- More than one PREVOTE message is sent in a round
- More than one PRECOMMIT message is sent in a round
- A PRECOMMIT message is sent without receiving +2/3 of voting-power equivalent
appropriate PREVOTE messages
- A PREVOTE message is sent for value V’ in round r’ while a PRECOMMIT message had
been sent for value V in round r by the same process (r’ > r), and there are no
+2/3 of voting-power equivalent PREVOTE(vr, V’) messages (vr ≥ 0 and vr > r and vr < r’)
as justification for sending PREVOTE(r’, V’)
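The first two invalid transitions can be checked locally on a single voteset. The following is a minimal sketch under assumed `Vote`, `VoteType`, and `BlockID` types; it illustrates only the duplicate-vote checks, not the full analysis.
```go
// Sketch: flags the first two invalid transitions above (more than one
// PREVOTE or PRECOMMIT per round). Vote, VoteType, and BlockID are assumed types.
func HasDuplicateVotes(votes []Vote) bool {
    type key struct {
        round int64
        typ   VoteType // PrevoteType or PrecommitType (assumed)
    }
    seen := make(map[key]BlockID)
    for _, v := range votes {
        k := key{v.Round, v.Type}
        if prev, ok := seen[k]; ok && prev != v.BlockID {
            // two different votes of the same type in the same round
            return true
        }
        seen[k] = v.BlockID
    }
    return false
}
```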

rust-spec/lightclient/detection/README.md (+69 -0)

@@ -0,0 +1,69 @@
# Tendermint fork detection and IBC fork detection
## Status
This is a work in progress.
This directory captures the ongoing work and discussion on fork
detection both in the context of a Tendermint light node and in the
context of IBC. It contains the following files
### [detection.md](./detection.md)
a draft of the light node fork detection, including the "proof of fork"
definition, that is, the data structure used to submit evidence to full
nodes.
### [discussions.md](./discussions.md)
A collection of ideas and intuitions from recent discussions
- the outcome of recent discussion
- a sketch of the light client supervisor to provide the context in
which fork detection happens
- a discussion about lightstore semantics
### [req-ibc-detection.md](./req-ibc-detection.md)
- a collection of requirements for fork detection in the IBC
context. In particular, it contains a section "Required Changes in
ICS 007" with necessary updates to ICS 007 to support Tendermint
fork detection
### [draft-functions.md](./draft-functions.md)
In order to address the collected requirements, we started to sketch
some functions that we will need in the future when we specify in more
detail the
- fork detection
- proof of fork generation
- proof of fork verification
on the following components:
- IBC on-chain components
- Relayer
### TODOs
We decided to merge the files while there are still open points to
address, to record the current state and move forward. In particular,
the following points need to be addressed:
- <https://github.com/informalsystems/tendermint-rs/pull/479#discussion_r466504876>
- <https://github.com/informalsystems/tendermint-rs/pull/479#discussion_r466493900>
- <https://github.com/informalsystems/tendermint-rs/pull/479#discussion_r466489045>
- <https://github.com/informalsystems/tendermint-rs/pull/479#discussion_r466491471>
Most likely we will write a specification on the light client
supervisor based on the outcomes of
- <https://github.com/informalsystems/tendermint-rs/pull/509>
that also addresses initialization
- <https://github.com/tendermint/spec/issues/131>

rust-spec/lightclient/detection/detection.md (+696 -0)

@@ -0,0 +1,696 @@
# Fork detector
> ***This is an unfinished draft. Comments are welcome!***
A fork detector (or detector for short) is a mechanism that expects as
input a header with some height *h*, connects to different Tendermint
full nodes, requests the header of height *h* from them, and then
cross-checks the headers and the input header.
There are two foreseeable use cases:
1) strengthen the light client: If a light client accepts a header
*hd* (after performing skipping or sequential verification), it can
use the detector to probe the system for conflicting headers and
increase the trust in *hd*. Instead of communicating with a single
full node, communicating with several full nodes increases the
likelihood of being aware of a fork (see [[accountability]] for
discussion about forks) in case there is one.
2) support fork accountability: In the case when more than 1/3 of
the voting power is held by faulty validators, faulty nodes may
generate two conflicting headers for the same height. The goal of the
detector is to learn about the conflicting headers by probing
different full nodes. Once a detector has two conflicting headers,
these headers are evidence of misbehavior. A natural extension is to
use the detector within a monitor process (on a full node) that calls
the detector on a sample (or all) headers (in parallel). (If the
sample is chosen at random, this adds a level of probabilistic
reasoning.) If conflicting headers are found, they are evidence that
can be used for punishing processes.
In this document we will focus on strengthening the light client, and
leave other uses of the detection mechanism (e.g., when run on a full
node) to the future.
## Context of this document
The light client verification specification [[verification]] is
designed for the Tendermint failure model (1/3 assumption)
[TMBC-FM-2THIRDS]. It is safe under this assumption, and live
if it can communicate reliably (that is, no message loss, no duplication, and
eventual delivery) and in a timely manner with a correct full node. If
this assumption is violated, the light client can be fooled into trusting a
header that was not generated by Tendermint consensus.
This specification, the fork detector, is a "second line of defense",
in case the 1/3 assumption is violated. Its goal is to detect forks (conflicting headers) and collect
evidence. However, it is impractical to probe all full nodes. At this
time we consider a simple scheme of maintaining an address book of
known full nodes from which a small subset (e.g., 4) are chosen
initially to communicate with. More involved bookkeeping with
probabilistic guarantees can be considered at later stages of the
project.
The light client maintains a simple address book containing addresses
of full nodes that it can pick as primary and secondaries. To obtain
a new header, the light client first does [verification][verification]
with the primary, and then cross-checks the header with the
secondaries using this specification.
### Tendermint Consensus and Forks
#### **[TMBC-GENESIS.1]**
Let *Genesis* be the agreed-upon initial block (file).
#### **[TMBC-FUNC.1]**
> **TODO** be more precise. +2/3 of b.NextV = c.Val signed c. For now
> the following will do:
Let b and c be two light blocks
with *b.Header.Height + 1 = c.Header.Height*. We define **signs(b,c)**
iff `ValidAndVerified(b, c)`
> **TODO** be more precise. +1/3 of b.NextV signed c. For now
> the following will do:
Let *b* and *c* be two light blocks. We define **supports(b,c,t)**
iff `ValidAndVerified(b, c)` at time *t*
> The following formalizes that *b* was properly generated by
> Tendermint; *b* can be traced back to genesis
#### **[TMBC-SEQ-ROOTED.1]**
Let *b* be a light block.
We define *sequ-rooted(b)* iff for all i, 1 <= i < h = b.Header.Height,
there exist light blocks a(i) s.t.
- *a(1) = Genesis* and
- *a(h) = b* and
- *signs( a(i) , a(i+1) )*.
> The following formalizes that *c* is trusted based on *b* in
> skipping verification. Observe that we do not require here (yet)
> that *b* was properly generated.
#### **[TMBC-SKIP-ROOT.1]**
Let *b* and *c* be light blocks. We define *skip-root(b,c,t)* if at
time t there exists an *h* and a sequence *a(1)*, ... *a(h)* s.t.
- *a(1) = b* and
- *a(h) = c* and
- *supports( a(i), a(i+1), t)*, for all i, *1 <= i < h*.
> **TODO** In the above we might use a sequence of times t(i). Not sure.
> **TODO:** I believe the following definition
> corresponds to **Slashable fork** in
> [forks][tendermintfork]. Please confirm!
> Observe that sign-skip-match is even defined if there is a fork on
> the chain.
#### **[TMBC-SIGN-SKIP-MATCH.1]**
Let *a*, *b*, *c*, be light blocks and *t* a time, we define
*sign-skip-match(a,b,c,t) = true* iff
- *sequ-rooted(a)* and
<!-- - *sequ-rooted(b)* and -->
- *b.Header.Height = c.Header.Height* and
- *skip-root(a,b,t)*
- *skip-root(a,c,t)*
implies *b = c*.
#### **[TMBC-SIGN-FORK.1]**
If there exist three light blocks a, b, and c, with
*sign-skip-match(a,b,c,t) =
false* then we have a *slashable fork*.
We call *a* a bifurcation block of the fork.
We say we have **a fork at height** *b.Header.Height*.
> The lightblock *a* need not be unique, that is, there may be
> several blocks that satisfy the above requirement for blocks *b* and
> *c*.
> **TODO:** I think the following definition is
> the intuition behind **main chain forks**
> in the document on [forks][tendermintfork]. However, main chain
> forks were defined more operationally as "forks that are observed by
> full nodes as part of the normal Tendermint consensus protocol". Please
> confirm!
#### **[TMBC-SIGN-UNIQUE.1]**
Let *b* and *c* be light blocks, we define *sign-unique(b,c) =
true* iff
- *b.Header.Height = c.Header.Height* and
- *sequ-rooted(b)* and
- *sequ-rooted(c)*
implies *b = c*.
If there exist two light blocks b and c, with *sign-unique(b,c) =
false* then we have a *fork on the chain*.
> The following captures what I believe is called a light client fork
> in our discussions. There is no fork on the chain up to the height
> of block b. However, c is of that height, is different, and passes skipping
> verification.
> Observe that a light client fork is a special case of a slashable
> fork.
#### **[TMBC-LC-FORK.1]**
Let *a*, *b*, *c*, be light blocks and *t* a time. We define
*light-client-fork(a,b,c,t)* iff
- *sign-skip-match(a,b,c,t) = false* and
- *sequ-rooted(b)* and
- *b* is "unique", that is, for all *d*, *sequ-rooted(d)* and
*d.Header.Height=b.Header.Height* implies *d = b*
> Finally, let's also define bogus blocks that have no support.
> Observe that bogus is even defined if there is a fork on the chain.
> Also, for the definition it would be sufficient to restrict *a* to
> *a.height < b.height* (which is implied by the definitions which
> unfold until *supports()*).
#### **[TMBC-BOGUS.1]**
Let *b* be a light block and *t* a time. We define *bogus(b,t)* iff
- *sequ-rooted(b) = false* and
- for all *a*, *sequ-rooted(a)* implies *skip-root(a,b,t) = false*
> Relation to [fork accountability][accountability]: F1, F2, and F3
> (equivocation, amnesia, back to the past) can lead to a fork on the
> chain and to a light client fork.
> F4 and F5 (phantom validators, lunatic) cannot lead to a fork on the
> chain but to a light client
> fork if *t+1 < f < 2t+1*.
> F4 and F5 can also lead to bogus blocks.
### Informal Problem statement
> We put tags on informal problem statements as there is no sequential
> specification.
The following requirements are operational in that they describe how
things should be done, rather than what should be done. However, they
do not constitute temporal logic verification conditions. For those,
see [LCD-DIST-*] below.
#### **[LCD-IP-STATE.1]**
The detector works on a LightStore that contains LightBlocks in one of
the state `StateUnverified`, `StateVerified`, `StateFailed`, or
`StateTrusted`.
#### **[LCD-IP-Q.1]**
Whenever the light client verifier performs `VerifyToTarget` with the
primary and returns with
`(lightStore, ResultSuccess)`, the
detector should query the secondaries by calling `FetchLightBlock` for height
*LightStore.LatestVerified().Height* remotely.
Then,
the detector returns the set of all headers *h'* downloaded from
secondaries that satisfy
- *h'* is different from *LightStore.LatestVerified()*
- *h'* is a (light) fork.
#### **[LCD-IP-PEERSET.1]**
Whenever the detector observes *detectable misbehavior* of a full node
from the set of Secondaries, that node should be replaced by a fresh full
node (one that has not been primary or secondary
before). Detectable misbehavior can be
- a timeout
- The case where *h'* is different
from *LightStore.LatestVerified()* but *h'* is not a fork, that is, if
*h'* is bogus. In other words, the
secondary cannot provide a sequence of light blocks that constitutes
proof of *h'*.
## Assumptions/Incentives/Environment
It is not in the interest of faulty full nodes to talk to the
detector as long as the detector is connected to at least one
correct full node. This would only increase the likelihood of
misbehavior being detected. Also we cannot punish them easily
(cheaply). The absence of a response need not be the fault of the full
node.
Correct full nodes have the incentive to respond, because the
detector may help them to understand whether their header is a good
one. We can thus base liveness arguments for the detector on
the assumption that correct full nodes reliably talk to the
detector.
### Assumptions
#### **[LCD-A-CorrFull.1]**
At all times there is at least one correct full
node among the primary and the secondaries.
**Remark:** Check whether [LCD-A-CorrFull.1] is not needed in the end because
the verification conditions [LCD-DIST-*] have preconditions on specific
cases where primary and/or secondaries are faulty.
#### **[LCD-A-RelComm.1]**
Communication between the detector and a correct full node is
reliable and bounded in time. Reliable communication means that
messages are not lost, not duplicated, and eventually delivered. There
is a (known) end-to-end delay *Delta*, such that if a message is sent
at time *t* then it is received and processed by time *t + Delta*.
This implies that we need a timeout of at least *2 Delta* for remote
procedure calls to ensure that the response of a correct peer arrives
before the timeout expires.
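As an illustration of the *2 Delta* timeout, the following is a minimal sketch, assuming the `FetchLightBlock` function from this document and Go's standard `time` and `errors` packages; retries and error propagation are elided.
```go
// Sketch: a remote fetch guarded by the 2*Delta timeout discussed above.
func FetchWithTimeout(peer PeerID, height Height, delta time.Duration) (LightBlock, error) {
    ch := make(chan LightBlock, 1)
    go func() { ch <- FetchLightBlock(peer, height) }()
    select {
    case lb := <-ch:
        return lb, nil
    case <-time.After(2 * delta):
        // a correct peer would have answered within 2*Delta
        return LightBlock{}, errors.New("peer did not respond within 2*Delta")
    }
}
```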
## (Distributed) Problem statement
> As the fork detector exists from the beginning to reduce the
> impact of faulty nodes, and faulty nodes imply a
> distributed system, there is no sequential specification.
The detector gets as input a lightstore *lightStore*.
Let *h-verified = lightStore.LatestVerified().Height* and
*h-trust=lightStore.LatestTrusted().Height* (see
[LCV-DATA-LIGHTSTORE]).
It queries the secondaries for headers at height *h-verified*.
The detector returns a set *PoF* of *Proof of Forks*, and should satisfy the following
temporal formulas:
#### **[LCD-DIST-INV.1]**
If there is no fork at height *h-verified* ([TMBC-SIGN-FORK.1]),
then the detector should return the empty set.
> If the empty set is returned the supervisor will change the state of
> the header at height *h-verified* to *stateTrusted*.
#### **[LCD-DIST-LIVE-FORK.1]**
If there is a fork at height *h-verified*, and
there are two correct full nodes *i* and *j* that are
- on different branches, and
- *i* is primary and
- *j* is secondary,
then the detector eventually outputs the fork.
#### **[LCD-DIST-LIVE-FORK-FAULTY.1]**
If there is a fork at height *h-verified*, and
there is a correct secondary that is on a different branch than the
primary reported,
then the detector eventually outputs the fork.
> The above property is quite operational ("than the primary
> reported"), but it captures the requirement quite closely. As the
> fork detector only makes sense in a distributed setting, and does
> not have a sequential specification, less "pure"
> specifications are acceptable.
> These properties capture the following operational requirement:
>
> **[LCD-REQ-REP.1]**
> If the detector observes two conflicting headers for height *h*,
> it should try to verify both. If both are verified it should report evidence.
> If the primary reports header *h* and a secondary reports header *h'*,
> and if *h'* can be verified based on common root of trust, then
> evidence should be generated;
> By verifying we mean calling `VerifyToTarget` from the
> [[verification]] specification.
## Definitions
- A fixed set of full nodes is provided in the configuration upon
initialization. Initially this set is partitioned into
- one full node that is the *primary* (singleton set),
- a set *Secondaries* (of fixed size, e.g., 3),
- a set *FullNodes*.
- A set *FaultyNodes* of nodes that the light client suspects of being faulty; it is initially empty
- *Lightstore* as defined in the [verification specification][verification].
#### **[LCD-INV-NODES.1]:**
The detector shall maintain the following invariants:
- *FullNodes \intersect Secondaries = {}*
- *FullNodes \intersect FaultyNodes = {}*
- *Secondaries \intersect FaultyNodes = {}*
and the following transition invariant
- *FullNodes' \union Secondaries' \union FaultyNodes' = FullNodes \union Secondaries \union FaultyNodes*
> The following invariant is very useful for reasoning, and underlies
> many of the intuitions in what follows.
#### **[LCD-INV-TRUSTED-AGREED.1]:**
It is always the case that the light client has downloaded a lightblock for height
*lightStore.LatestTrusted().Height*
from each of the current primary and the secondaries, and that all reported
the identical lightblock for that height.
> In the above, I guess "the identical" might be replaced with "a
> matching" to cover commits that might be different.
> The above requires that, before we pick a new secondary, we
> query the secondary for the header of height
> *lightStore.LatestTrusted().Height*.
## Solution
### Data Structures
Lightblocks and LightStores are
defined at [LCV-DATA-LIGHTBLOCK.1] and [LCV-DATA-LIGHTSTORE.1]. See the [verification specification][verification] for details.
> The following data structure [LCV-DATA-POF.1]
> defines a **proof of fork**. Following
> [TMBC-SIGN-FORK.1], we require two blocks *b* and *c* for the same
> height that can both be verified from a common root block *a* (using
> the skipping or the sequential method).
> [LCV-DATA-POF.1] mirrors the definition [TMBC-SIGN-FORK.1]:
> *TrustedBlock* corresponds to *a*, and *PrimaryTrace* and *SecondaryTrace*
> are traces to two blocks *b* and *c*. The traces establish that both
> *skip-root(a,b,t)* and *skip-root(a,c,t)* are satisfied.
#### **[LCV-DATA-POF.1]**
```go
type LightNodeProofOfFork struct {
    TrustedBlock   LightBlock
    PrimaryTrace   []LightBlock
    SecondaryTrace []LightBlock
}
```
<!-- ```go -->
<!-- // info about the LC's last trusted block -->
<!-- type TrustedBlockInfo struct { -->
<!-- Height int -->
<!-- BlockID BlockID -->
<!-- } -->
<!-- ``` -->
#### **[LCV-DATA-POFSTORE.1]**
Proofs of Forks are stored in a structure which stores all proofs
generated during detection.
```go
type PoFStore struct {
...
}
```
In addition to the functions defined in
the [verification specification][verification], the
LightStore exposes the following function
#### **[LCD-FUNC-SUBTRACE.1]:**
```go
func (ls LightStore) Subtrace(from int, to int) LightStore
```
- Expected postcondition
- returns a lightstore that contains all lightblocks *b* from *ls*
that satisfy: *from < b.Header.Height <= to*
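A minimal sketch of this postcondition over a slice-backed store follows; the `Blocks` field and `NewLightStore` constructor are hypothetical names, not part of the specification.
```go
// Sketch: returns the lightblocks b with from < b.Header.Height <= to.
func (ls LightStore) Subtrace(from int, to int) LightStore {
    res := NewLightStore() // hypothetical constructor
    for _, b := range ls.Blocks { // hypothetical slice of stored lightblocks
        if from < b.Header.Height && b.Header.Height <= to {
            res.Update(b, StateVerified)
        }
    }
    return res
}
```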
----
### Inter Process Communication
```go
func FetchLightBlock(peer PeerID, height Height) LightBlock
```
See the [verification specification][verification] for details.
#### **[LCD-FUNC-SUBMIT.1]:**
```go
func SubmitProofOfFork(pof LightNodeProofOfFork) Result
```
**TODO:** finalize what this should do, and what detail of
specification we need.
- Implementation remark
- Expected precondition
- none
- Expected postcondition
- submit evidence to primary and the secondary in *pof*, that is,
to
- `pof.PrimaryTrace[1].Provider`
- `pof.SecondaryTrace[1].Provider`
- **QUESTION** minimize data? We could submit to the primary only
the trace of the secondary, and vice versa. Do we need to spell
that out here? (Also, by [LCD-INV-TRUSTED-AGREED.1], we do not
need to send `pof.TrustedBlock`)
- **FUTURE WORK:** we might send *pof* to primary or all
secondaries or broadcast to all full nodes. However, in evidence
detection this might need that a full node has to check a *pof*
where both traces are not theirs. This leads to more complicated
logic at the full node, which we do not need right now.
- Error condition
- none
### Auxiliary Functions (Local)
#### **[LCD-FUNC-CROSS-CHECK.1]:**
```go
func CrossCheck(peer PeerID, testedLB LightBlock) (result) {
    sh := FetchLightBlock(peer, testedLB.Height);
    // as the check below only needs the header, it is sufficient
    // to download the header rather than the LightBlock
    if testedLB.Header == sh.Header {
        return OK
    }
    else {
        return DoesNotMatch
    }
}
```
- Implementation remark
- download block and compare to previously downloaded one.
- Expected precondition
- Expected postcondition
- Error condition
#### **[LCD-FUNC-REPLACE-PRIMARY.1]:**
```go
Replace_Primary()
```
**TODO:** formalize conditions
- Implementation remark
- the primary is replaced by a secondary, and lightblocks above
trusted blocks are removed
- to maintain a constant size of secondaries, at this point we
might need to
- pick a new secondary *nsec*
- maintain [LCD-INV-TRUSTED-AGREED.1], that is,
    - call `CrossCheck(nsec, lightStore.LatestTrusted())`.
If it matches we are OK, otherwise
- we repeat with another full node as new
secondary candidate
- **FUTURE:** try to do fork detection from some possibly old
lightblock in store. (Might be the approach for the
light node that assumes to be connected to correct
full nodes only from time to time)
- Expected precondition
- *FullNodes* is nonempty
- Expected postcondition
- *primary* is moved to *FaultyNodes*
- all lightblocks with height greater than
lightStore.LatestTrusted().Height are removed from *lightStore*.
- a secondary *s* is moved from *Secondaries* to primary
> this ensures that *s* agrees on the Last Trusted State
- Error condition
- if precondition is violated
#### **[LCD-FUNC-REPLACE-SECONDARY.1]:**
```go
Replace_Secondary(addr Address)
```
**TODO:** formalize conditions
- Implementation remark
- maintain [LCD-INV-TRUSTED-AGREED.1], that is,
    - call `CrossCheck(nsec, lightStore.LatestTrusted())`.
If it matches we are OK, otherwise
- we might just repeat with another full node as new secondary
- **FUTURE:** try to do fork detection from some possibly old
lightblock in store. (Might be the approach for the
light node that assumes to be connected to correct
full nodes only from time to time)
- Expected precondition
- *FullNodes* is nonempty
- Expected postcondition
- addr is moved from *Secondaries* to *FaultyNodes*
- an address *a* is moved from *FullNodes* to *Secondaries*
- Error condition
- if precondition is violated
### From the verifier
```go
func VerifyToTarget(primary PeerID, lightStore LightStore,
targetHeight Height) (LightStore, Result)
```
See the [verification specification][verification] for details.
## Protocol
### Shared data of the light client
- a pool of full nodes *FullNodes* that have not been contacted before
- peer set called *Secondaries*
- primary
- lightStore
### Outline
The problem laid out is solved by calling the function `ForkDetector`
with a lightstore that contains a light block that has just been
verified by the verifier.
- **TODO:** We should clarify the expectations on VerifyToTarget, so that if it
  returns TimeoutError the node can be assumed faulty. I guess that
  VerifyToTarget with a correct full node should never terminate with
  TimeoutError.
- **TODO:** clarify EXPIRED case. Can we always punish? Can we give sufficient
conditions?
### Fork Detector
#### **[LCD-FUNC-DETECTOR.1]:**
```go
func ForkDetector(ls LightStore, PoFs PoFStore)
{
    testedLB := ls.LatestVerified()
    for _, secondary := range Secondaries {
        if CrossCheck(secondary, testedLB) == OK {
            // header matches. we do nothing.
        }
        else {
            // [LCD-REQ-REP]
            // header does not match. there is a situation.
            // we fetch the conflicting lightblock sh and try to verify
            // it by querying the secondary
            sh := FetchLightBlock(secondary, testedLB.Header.Height)
            // we set up an auxiliary lightstore with the highest
            // trusted lightblock and the lightblock we want to verify
            auxLS.Init
            auxLS.Update(ls.LatestTrusted(), StateVerified);
            auxLS.Update(sh, StateUnverified);
            auxLS, result := VerifyToTarget(secondary, auxLS, sh.Header.Height)
            if result == ResultSuccess || result == EXPIRED {
                // we verified header sh, which conflicts with the
                // verified header: there is a fork on the main blockchain.
                // If the return code was EXPIRED it might be too late
                // to punish; we still report it.
                pof = new LightNodeProofOfFork;
                pof.TrustedBlock := ls.LatestTrusted()
                pof.PrimaryTrace :=
                    ls.Subtrace(ls.LatestTrusted().Height, testedLB.Height);
                pof.SecondaryTrace :=
                    auxLS.Subtrace(ls.LatestTrusted().Height, testedLB.Height);
                PoFs.Add(pof);
            }
            else {
                // secondary might be faulty or unreachable;
                // it might fail to provide a trace that supports sh,
                // or time out
                Replace_Secondary(secondary)
            }
        }
    }
    return PoFs
}
```
**TODO:** formalize conditions
- Expected precondition
- Secondaries initialized and non-empty
- `PoFs` initialized and empty
- *lightStore.LatestTrusted().Height < lightStore.LatestVerified().Height*
- Expected postcondition
- satisfies [LCD-DIST-INV.1], [LCD-DIST-LIVE-FORK.1]
- removes faulty secondary if it reports wrong header
- **TODO** submit proof of fork
- Error condition
- fails if precondition is violated
- fails if [LCV-INV-TP] is violated (no trusted header within
  the trusting period)
----
## Correctness arguments
#### Argument for [LCD-DIST-INV]
**TODO**
#### Argument for [LCD-DIST-LIVE-FORK]
**TODO**
# References
> links to other specifications/ADRs this document refers to
[[verification]] The specification of the light client verification.
[[tendermintfork]] Tendermint fork detection and accountability
[[accountability]] Fork accountability
[TMBC-FM-2THIRDS-linkVDD]: https://github.com/informalsystems/VDD/tree/master/blockchain/blockchain.md#**[TMBC-FM-2THIRDS-link]**:
[TMBC-FM-2THIRDS-link]: https://github.com/tendermint/spec/blob/master/spec/consensus/light-client/verification.md
[block]: https://github.com/tendermint/spec/blob/d46cd7f573a2c6a2399fcab2cde981330aa63f37/spec/core/data_structures.md
[blockchain]: https://github.com/informalsystems/VDD/tree/master/blockchain/blockchain.md
[lightclient]: https://github.com/interchainio/tendermint-rs/blob/e2cb9aca0b95430fca2eac154edddc9588038982/docs/architecture/adr-002-lite-client.md
[verificationVDD]: https://github.com/informalsystems/VDD/blob/master/lightclient/failuredetector.md
[verification]: https://github.com/informalsystems/tendermint-rs/blob/master/docs/spec/lightclient/verification/verification.md
[accountability]: https://github.com/tendermint/spec/blob/master/spec/consensus/light-client/accountability.md
[tendermintfork]: https://docs.google.com/document/d/1xjyp5JOPt7QfHem1AFEaowBH2Plk0IHACWtYFXFvO7E/edit#heading=h.th2369ptc2ve

rust-spec/lightclient/detection/discussions.md (+178 -0)

@@ -0,0 +1,178 @@
# Results of Discussions and Decisions
- Generating a minimal proof of fork (as suggested in [Issue #5083](https://github.com/tendermint/tendermint/issues/5083)) is too costly at the light client
- we do not know all lightblocks from the primary
- therefore there are many scenarios. We might even need to ask
  the primary again for additional lightblocks to isolate the
  branch.
> For instance, the light node starts with block at height 1 and the
> primary provides a block of height 10 that the light node can
> verify immediately. In cross-checking, a secondary now provides a
> conflicting header b10 of height 10 that needs another header b5
> of height 5 to
> verify. Now, in order for the light node to convince the primary:
>
> - The light node cannot just send b5, as it is not clear whether
>   the fork happened before or after 5
> - The light node cannot just send b10, as the primary would also
>   need b5 for verification
> - In order to minimize the evidence, the light node may try to
>   figure out where the branch happens, e.g., by asking the primary
>   for height 5 (it might be that more queries are required, also
>   to the secondary). However, assuming that in this scenario the
>   primary is faulty, it may not respond.
As the main goal is to catch misbehavior of the primary,
evidence generation and punishment must not depend on its
cooperation. So the moment we have a proof of fork (even if it
contains several light blocks) we should submit it right away.
- decision: "full" proof of fork consists of two traces that originate in the
same lightblock and lead to conflicting headers of the same height.
- For submission of proof of fork, we may do some optimizations, for
instance, we might just submit a trace of lightblocks that verifies a block
different from the one the full node knows (we do not send the trace
the primary gave us back to the primary)
- The light client attack is via the primary. Thus we try to
catch if the primary installs a bad light block
- We do not check secondary against secondary
- For each secondary, we check the primary against one secondary
- Observe that just two blocks for the same height are not
sufficient proof of fork.
One of the blocks may be bogus [TMBC-BOGUS.1] which does
not constitute slashable behavior.
This leads to the question of whether the light node should try to do
fork detection on its initial block (from subjective
initialization). This could be done by doing backwards verification
(with the hashes) until a bifurcation block is found.
While there are scenarios where a
fork could be found, there is also the scenario where a faulty full
node feeds the light node with bogus light blocks and forces the light
node to check hashes until a bogus chain is out of the trusting period.
As a result, the light client
should not try to detect a fork for its initial header. **The initial
header must be trusted as is.**
# Light Client Sequential Supervisor
**TODO:** decide where (into which specification) to put the
following:
We describe the context on which the fork detector is called by giving
a sequential version of the supervisor function.
Roughly, it alternates two phases namely:
- Light Client Verification. As a result, a header of the required
height has been downloaded from and verified with the primary.
- Light Client Fork Detection. As a result, the header has been
  cross-checked with the secondaries. In case there is a fork we
  submit a "proof of fork" and exit.
#### **[LC-FUNC-SUPERVISOR.1]:**
```go
func SequentialSupervisor() (Error) {
    loop {
        // get the next height
        nextHeight := input();

        // Verify
        result := NoResult;
        while result != ResultSuccess {
            lightStore, result := VerifyToTarget(primary, lightStore, nextHeight);
            if result == ResultFailure {
                // pick new primary (promote a secondary to primary)
                // and delete all lightblocks above
                // LastTrusted (they have not been cross-checked)
                Replace_Primary();
            }
        }

        // Cross-check
        PoFs := ForkDetector(lightStore, PoFs);
        if PoFs.Empty {
            // no fork detected with secondaries, we trust the new
            // lightblock
            lightStore.Update(lightStore.LatestVerified(), StateTrusted);
        }
        else {
            // there is a fork, we submit the proofs and exit
            for _, p := range PoFs {
                SubmitProofOfFork(p);
            }
            return(ErrorFork);
        }
    }
}
```
**TODO:** finish conditions
- Implementation remark
- Expected precondition
- *lightStore* initialized with trusted header
- *PoFs* empty
- Expected postcondition
- runs forever, or
- is terminated by user and satisfies LightStore invariant, or **TODO**
- has submitted proof of fork upon detecting a fork
- Error condition
- none
----
# Semantics of the LightStore
Currently, a lightblock in the lightstore can be in one of the
following states:
- StateUnverified
- StateVerified
- StateFailed
- StateTrusted
The intuition is that `StateVerified` captures that the lightblock has
been verified with the primary, and `StateTrusted` is the state after
successful cross-checking with the secondaries.
Assuming there is **always one correct node among primary and
secondaries**, and there is no fork on the blockchain, lightblocks that
are in `StateTrusted` can be used by the user with the guarantee of
"finality". If a block in `StateVerified` is used, it might be that
detection later finds a fork, and a roll-back might be needed.
**Remark:** The assumption of one correct node does not render
verification useless. It is true that if the primary and the
secondaries return the same block we may trust it. However, if there
is a node that provides a different block, the light node still needs
verification to understand whether there is a fork, or whether the
different block is just bogus (without any support of some previous
validator set).
**Remark:** A light node may choose the full nodes it communicates
with (the light node and the full node might even belong to the same
stakeholder) so the assumption might be justified in some cases.
In the future, we will make the following changes:
- we assume that only from time to time, the light node is
connected to a correct full node
- this means for some limited time, the light node might have no
means to defend against light client attacks
- as a result we do not have finality
- once the light node reconnects with a correct full node, it
should detect the light client attack and submit evidence.
Under these assumptions, `StateTrusted` loses its meaning. As a
result, it should be removed from the API. We suggest that we replace
it with a flag "trusted" that can be used
- internally for efficiency reasons (to maintain
[LCD-INV-TRUSTED-AGREED.1] until a fork is detected)
- by light client based on the "one correct full node" assumption
----

rust-spec/lightclient/detection/draft-functions.md (+289 -0)

@@ -0,0 +1,289 @@
# Draft of Functions for Fork Detection and Proof of Fork Submission
This document collects drafts of functions for generating and
submitting proof of fork in the IBC context
- [IBC](#on-chain-ibc-component)
- [Relayer](#relayer)
## On-chain IBC Component
> The following is a suggestion to change the function defined in ICS 007
#### [TAG-IBC-MISBEHAVIOR.1]
```go
func checkMisbehaviourAndUpdateState(cs: ClientState, PoF: LightNodeProofOfFork)
```
**TODO:** finish conditions
- Implementation remark
- Expected precondition
- PoF.TrustedBlock.Header is equal to lightBlock on store with
same height
- both traces end with header of same height
- headers are different
- both traces are supported by PoF.TrustedBlock (`supports`
defined in [TMBC-FUNC]), that is, for `t = currentTimestamp()` (see
ICS 024)
- supports(PoF.TrustedBlock, PoF.PrimaryTrace[1], t)
- supports(PoF.PrimaryTrace[i], PoF.PrimaryTrace[i+1], t) for
*0 < i < length(PoF.PrimaryTrace)*
- supports(PoF.TrustedBlock, PoF.SecondaryTrace[1], t)
- supports(PoF.SecondaryTrace[i], PoF.SecondaryTrace[i+1], t) for
*0 < i < length(PoF.SecondaryTrace)*
- Expected postcondition
- set cs.FrozenHeight to min(cs.FrozenHeight, PoF.TrustedBlock.Header.Height)
- Error condition
- none
----
> The following is a suggestion to add functionality to ICS 002 and 007.
> I suppose the following is the most efficient way to get the required
> information. Another option is to subscribe to "header install"
> events via CosmosSDK
#### [TAG-IBC-HEIGHTS.1]
```go
func QueryHeightsRange(id, from, to) ([]Height)
```
- Expected postcondition
- returns all heights *h*, with *from <= h <= to* for which the
IBC component has a consensus state.
----
> This function can be used if the relayer has no information about
> the IBC component. This allows late-joining relayers to also
> participate in fork detection and the generation of proofs of
> fork. Alternatively, we may also postulate that relayers are not
> responsible for detecting forks for heights before they started (and
> subscribed to the transactions reporting fresh headers being
> installed at the IBC component).
## Relayer
### Auxiliary Functions to be implemented in the Light Client
#### [LCV-LS-FUNC-GET-PREV.1]
```go
func (ls LightStore) GetPreviousVerified(height Height) (LightBlock, bool)
```
- Expected postcondition
- returns a verified LightBlock, whose height is maximal among all
verified lightblocks with height smaller than `height`
----
### Relayer Submitting Proof of Fork to the IBC Component
There are two ways the relayer can detect a fork
- by the fork detector of one of its lightclients
- by checking the consensus state of the IBC component
The following function ignores how the proof of fork was generated.
It takes a proof of fork as input and computes a proof of fork that
will be accepted by the IBC component.
The problem addressed here is that both the relayer's light client
and the IBC component have incomplete light stores, which might
not have all light blocks in common.
Hence the relayer has to figure out what the IBC component knows
(intuitively, a meeting point between the two lightstores
computed in `commonRoot`) and compute a proof of fork
(`extendPoF`) that the IBC component will accept based on its
knowledge.
The auxiliary functions `commonRoot` and `extendPoF` are
defined below.
#### [TAG-SUBMIT-POF-IBC.1]
```go
func SubmitIBCProofOfFork(
    lightStore LightStore,
    PoF: LightNodeProofOfFork,
    ibc IBCComponent) (Error) {
    if ibc.queryChainConsensusState(PoF.TrustedBlock.Height) == PoF.TrustedBlock {
        // IBC component has root of PoF on store, we can just submit
        ibc.submitMisbehaviourToClient(ibc.id, PoF)
        return Success
        // not sure about the id parameter
    }
    else {
        // the ibc component does not have the TrustedBlock and might
        // even be on yet a different branch. We have to compute a PoF
        // that the ibc component can verify based on its current
        // knowledge
        ibcLightBlock, lblock, _, result := commonRoot(lightStore, ibc, PoF.TrustedBlock)

        if result == Success {
            newPoF = extendPoF(ibcLightBlock, lblock, lightStore, PoF)
            ibc.submitMisbehaviourToClient(ibc.id, newPoF)
            return Success
        }
        else {
            return CouldNotGeneratePoF
        }
    }
}
```
**TODO:** finish conditions
- Implementation remark
- Expected precondition
- Expected postcondition
- Error condition
- none
----
### Auxiliary Functions at the Relayer
> If the relayer detects a fork, it has to compute a proof of fork that
> will convince the IBC component. That is, it has to compare the
> relayer's local lightstore against the lightstore of the IBC
> component, and find common ancestor lightblocks.
#### [TAG-COMMON-ROOT.1]
```go
func commonRoot(lightStore LightStore, ibc IBCComponent, lblock
    LightBlock) (LightBlock, LightBlock, LightStore, Result) {
    auxLS.Init
    // first we ask for the heights the ibc component is aware of
    ibcHeights = ibc.QueryHeightsRange(
        ibc.id,
        lightStore.LowestVerified().Height,
        lblock.Height - 1);
    // this function does not exist yet. Alternatively, we may
    // request all transactions that installed headers via CosmosSDK
    for {
        h, result = max(ibcHeights)
        if result == Empty {
            return (_, _, _, NoRoot)
        }
        ibcLightBlock = ibc.queryChainConsensusState(h)
        auxLS.Update(ibcLightBlock, StateVerified);
        connector, result := Connector(lightStore, ibcLightBlock, lblock.Header.Height)
        if result == Success {
            return (ibcLightBlock, connector, auxLS, Success)
        }
        else {
            ibcHeights.remove(h)
        }
    }
}
```
- Expected postcondition
- returns
- a lightBlock b1 from the IBC component, and
- a lightBlock b2
from the local lightStore with height less than
lblock.Header.Height, s.t. b1 supports b2, and
- a lightstore with the blocks downloaded from
the ibc component
----
#### [TAG-LS-FUNC-CONNECT.1]
```go
func Connector (lightStore LightStore, lb LightBlock, h Height) (LightBlock, bool)
```
- Expected postcondition
- returns a verified LightBlock from lightStore with height less
than *h* that can be
verified by lb in one step.
**TODO:** for the above to work we need an invariant that all verified
lightblocks form a chain of trust. Otherwise, we need a lightblock
that has a chain of trust to height *h*.
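A minimal sketch of such a Connector follows, assuming a hypothetical `Verified()` accessor that returns the verified lightblocks of the store, and the one-step `ValidAndVerified` check; it is illustrative only.
```go
// Sketch: returns the highest verified lightblock below h that lb
// verifies in a single step. Verified() is a hypothetical accessor.
func Connector(lightStore LightStore, lb LightBlock, h Height) (LightBlock, bool) {
    var best LightBlock
    found := false
    for _, b := range lightStore.Verified() {
        if b.Header.Height < h && ValidAndVerified(lb, b) == OK {
            if !found || b.Header.Height > best.Header.Height {
                best, found = b, true
            }
        }
    }
    return best, found
}
```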
> Once the common root is found, a proof of fork that will be accepted
> by the IBC component needs to be generated. This is done in the
> following function.
#### [TAG-EXTEND-POF.1]
```go
func extendPoF (root LightBlock,
                connector LightBlock,
                lightStore LightStore,
                PoF LightNodeProofOfFork) (LightNodeProofOfFork)
- Implementation remark
- PoF is not sufficient to convince an IBC component, so we extend
the proof of fork farther into the past
- Expected postcondition
- returns a newPoF:
- newPoF.TrustedBlock = root
- let prefix =
connector +
lightStore.Subtrace(connector.Header.Height, PoF.TrustedBlock.Header.Height-1) +
PoF.TrustedBlock
- newPoF.PrimaryTrace = prefix + PoF.PrimaryTrace
- newPoF.SecondaryTrace = prefix + PoF.SecondaryTrace
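The following sketch mirrors this postcondition, assuming traces are plain `[]LightBlock` values and that `Subtrace` here yields its blocks as a slice (a simplification of the LightStore return type above).
```go
// Sketch: extends a proof of fork so that it starts at root, per the
// postcondition above. Subtrace is assumed to yield a []LightBlock here.
func ExtendPoF(root LightBlock, connector LightBlock,
    lightStore LightStore, pof LightNodeProofOfFork) LightNodeProofOfFork {
    // prefix = connector + blocks strictly between connector and the old root + old root
    prefix := []LightBlock{connector}
    prefix = append(prefix, lightStore.Subtrace(
        connector.Header.Height, pof.TrustedBlock.Header.Height-1)...)
    prefix = append(prefix, pof.TrustedBlock)
    // copy the prefix for each trace to avoid sharing the backing array
    primary := append(append([]LightBlock{}, prefix...), pof.PrimaryTrace...)
    secondary := append(append([]LightBlock{}, prefix...), pof.SecondaryTrace...)
    return LightNodeProofOfFork{
        TrustedBlock:   root,
        PrimaryTrace:   primary,
        SecondaryTrace: secondary,
    }
}
```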
### Detecting a fork at the IBC component
The following function is assumed to be called regularly to check
the latest consensus state of the IBC component. Alternatively, this
logic can be executed whenever the relayer is informed (via an event)
that a new header has been installed.
#### [TAG-HANDLER-DETECT-FORK.1]
```go
func DetectIBCFork(ibc IBCComponent, lightStore LightStore) (LightNodeProofOfFork, Error) {
    cs = ibc.queryClientState(ibc);
    lb, found := lightStore.Get(cs.Header.Height)
    if !found {
        // TODO: need verify to target
        lb, result = LightClient.Main(primary, lightStore, cs.Header.Height)
        // [LCV-FUNC-IBCMAIN.1]
        // TODO: decide what to do following the outcome of Issue #499
        // I guess here we have to get into the light client
    }
    if cs != lb {
        // IBC component disagrees with my primary.
        // I compute the common root and generate a proof of fork.
        ibcLightBlock, lblock, ibcStore, result := commonRoot(lightStore, ibc, lb)
        pof = new LightNodeProofOfFork;
        pof.TrustedBlock := ibcLightBlock
        pof.PrimaryTrace := ibcStore + cs
        pof.SecondaryTrace := lightStore.Subtrace(lblock.Header.Height,
            lb.Header.Height);
        return (pof, Fork)
    }
    return (nil, NoFork)
}
```
**TODO:** finish conditions
- Implementation remark
    - we ask the handler for the latest consensus state and cross-check it
      with the chain. In case they deviate, we generate a PoF.
    - we assume the IBC component is correct. It has verified the
      consensus state.
- Expected precondition
- Expected postcondition

+345 -0 rust-spec/lightclient/detection/req-ibc-detection.md
# Requirements for Fork Detection in the IBC Context
## What you need to know about IBC
In the following, I distilled what I considered relevant from
<https://github.com/cosmos/ics/tree/master/spec/ics-002-client-semantics>
### Components and their interface
#### Tendermint Blockchains
> I assume you know what that is.
#### An IBC/Tendermint correspondence
| IBC Term | Tendermint-RS Spec Term | Comment |
|----------|-------------------------| --------|
| `CommitmentRoot` | AppState | app hash |
| `ConsensusState` | Lightblock | not all fields are there. NextValidator is definitely needed |
| `ClientState` | latest light block + configuration parameters (e.g., trusting period) + `frozenHeight` | NextValidators missing; what is `proofSpecs`?|
| `frozenHeight` | height of fork | set when a fork is detected |
| "would-have-been-fooled" | light node fork detection | light node may submit proof of fork to IBC component to halt it |
| `Height` | (no epochs) | (epoch,height) pair in lexicographical order (`compare`) |
| `Header` | ~signed header | validatorSet explicit (no hash); nextValidators missing |
| `Evidence` | t.b.d. | definition unclear "which the light client would have considered valid". Data structure will need to change |
| `verify` | `ValidAndVerified` | signature does not match perfectly (ClientState vs. LightBlock) + in `checkMisbehaviourAndUpdateState` it is unclear whether it uses traces or goes to h1 and h2 in one step |
#### Some IBC links
- [QueryConsensusState](https://github.com/cosmos/cosmos-sdk/blob/2651427ab4c6ea9f81d26afa0211757fc76cf747/x/ibc/02-client/client/utils/utils.go#L68)
#### Required Changes in ICS 007
- `assert(height > 0)` in definition of `initialise` doesn't match
definition of `Height` as *(epoch,height)* pair.
- `initialise` needs to be updated to new data structures
- `clientState.frozenHeight` semantics seem not totally consistent in the
  document. E.g., `min` needs to be defined over an optional value in
  `checkMisbehaviourAndUpdateState`. Also, if the client is frozen, why
  does it accept more evidence?
- `checkValidityAndUpdateState`
- `verify`: it needs to be clarified that checkValidityAndUpdateState
does not perform "bisection" (as currently hinted in the text) but
performs a single step of "skipping verification", called,
`ValidAndVerified`
- `assert (header.height > clientState.latestHeight)`: no old
headers can be installed. This might be OK, but we need to check
interplay with misbehavior
- `clientState` needs to be updated according to the complete data
  structure
- `checkMisbehaviourAndUpdateState`: as evidence will contain a trace
(or two), the assertion that uses verify will need to change.
- ICS 002 states w.r.t. `queryChainConsensusState` that "Note that
retrieval of past consensus states by height (as opposed to just the
current consensus state) is convenient but not required." For
Tendermint fork detection, this seems to be a necessity.
- `Header` should become a lightblock
- `Evidence` should become `LightNodeProofOfFork` [LCV-DATA-POF.1]
- `upgradeClientState`: what is the semantics (in particular, what is
  `height` doing)?
- `checkMisbehaviourAndUpdateState(cs: ClientState, PoF:
LightNodeProofOfFork)` needs to be adapted
#### Handler
A blockchain runs a **handler** that passively collects information about
other blockchains. It can be thought of as a state machine that takes
input events.
- the state includes a lightstore (I guess called `ConsensusState`
in IBC)
- The following function is used to pass a header to a handler
```go
type checkValidityAndUpdateState = (Header) => Void
```
For Tendermint, it will perform `ValidAndVerified`, that is, it does
the trusting period check and the +1/3 check (+2/3 for sequential
headers). If it verifies a header, it adds it to its lightstore; if the
header does not pass verification, it drops it. Right now it only
accepts a header more recent than the latest header, and drops older
ones or ones that could not be verified. A sketch of this logic is
given after this list.
> The above paragraph captures what I believe is the current
logic of `checkValidityAndUpdateState`. It may be subject to
change. E.g., maintain a lightstore with state (unverified, verified)
- The following function is used to pass "evidence" (this we
will need to make precise eventually) to a handler
```go
type checkMisbehaviourAndUpdateState = (bytes) => Void
```
We have to design this, and the data that the handler can use to
check that there was some misbehavior (fork), in order to react to
it, e.g., by flagging the situation and stopping the protocol.
- The following function is used to query the light store (`ConsensusState`)
```go
type queryChainConsensusState = (height: uint64) => ConsensusState
```
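
> A hedged sketch of the `checkValidityAndUpdateState` logic described
> above, in the Go-style pseudocode used in this spec. `HandlerState` and
> its fields are assumptions made for illustration; treating the input as a
> lightblock anticipates the "`Header` should become a lightblock" change
> listed earlier.

```go
// Assumed handler state: a lightstore plus the latest installed block.
type HandlerState struct {
    lightStore LightStore
    latest     LightBlock
}

func checkValidityAndUpdateState(hs *HandlerState, lb LightBlock) {
    // Only headers more recent than the latest one are considered.
    if lb.Header.Height <= hs.latest.Header.Height {
        return // drop old headers
    }
    // Trusting period check and +1/3 check (+2/3 for sequential headers).
    if ValidAndVerified(hs.latest, lb) == OK {
        hs.lightStore.Update(lb, StateVerified)
        hs.latest = lb
    }
    // Headers that do not pass verification are dropped.
}
```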
#### Relayer
- The active components are called **relayers**.
- a relayer contains light clients to two (or more?) blockchains
- the relayer sends headers and data to the handler to invoke
  `checkValidityAndUpdateState` and
  `checkMisbehaviourAndUpdateState`. It may also query
  `queryChainConsensusState`.
- multiple relayers may talk to one handler. Some relayers might be
  faulty. We assume the existence of at least one correct relayer.
## Informal Problem Statement: Fork detection in IBC
### Relayer requirement: Evidence for Handler
- The relayer should provide the handler with
"evidence" that there was a fork.
- The relayer can read the handler's consensus state. Thus the relayer can
  feed the handler precisely the information the handler needs to detect a
  fork. What this information is needs to be specified.
- The information depends on the verification the handler does. It
might be necessary to provide a bisection proof (list of
lightblocks) so that the handler can verify based on its local
lightstore a header *h* that is conflicting with a header *h'* in the
local lightstore, that is, *h != h'* and *h.Height = h'.Height*
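
> For concreteness, a sketch of the shape such a bisection proof could
> take; it mirrors the `TrustedBlock`/`PrimaryTrace`/`SecondaryTrace`
> fields used by `DetectIBCFork` and [LCV-DATA-POF.1], but the Go rendering
> is illustrative only.

```go
// Illustrative rendering of LightNodeProofOfFork: a block the handler
// already trusts, plus two traces whose final blocks conflict at the
// same height (h != h', h.Height = h'.Height).
type LightNodeProofOfFork struct {
    TrustedBlock   LightBlock   // common root the handler trusts
    PrimaryTrace   []LightBlock // bisection trace ending in header h
    SecondaryTrace []LightBlock // trace ending in conflicting header h'
}
```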
### Relayer requirement: Fork detection
Let's assume there is a fork at chain A. There are two ways the
relayer can figure that out:
1. as the relayer contains a light client for A, it also includes a fork
detector that can detect a fork.
2. the relayer may also detect a fork by observing that the
handler for chain A (on chain B)
is on a different branch than the relayer
- in both detection scenarios, the relayer should submit evidence to
  full nodes of chain A where there is a fork. As we assume a full node
  has a complete list of blocks, it is sufficient to send "Bucky's
  evidence" (<https://github.com/tendermint/tendermint/issues/5083>),
  that is (see the sketch after this list),
    - two lightblocks from different branches +
    - a lightblock (perhaps just a height) from which both blocks
      can be verified.
- in scenario 2, the relayer must feed the A-handler (on chain B)
  a proof of a fork on A so that chain B can react accordingly
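
> A minimal sketch of what "Bucky's evidence" could look like as a data
> structure; the type and field names are assumptions for illustration and
> are not taken from the linked issue.

```go
// Hypothetical shape of the evidence sent to a full node: two
// conflicting lightblocks plus a common height from which a full node
// (which has all blocks) can verify both.
type ForkEvidenceForFullNode struct {
    CommonHeight Height     // height from which both blocks can be verified
    BlockA       LightBlock // lightblock from one branch
    BlockB       LightBlock // conflicting lightblock from the other branch
}
```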
### Handler requirement
- there are potentially many relayers, some correct some faulty
- a handler cannot trust the information provided by the relayer,
  but must verify it (trust, but verify)
- in case of a fork, we accept that the handler temporarily stores
headers (tagged as verified).
- eventually, a handler should be informed
(`checkMisbehaviourAndUpdateState`)
by some relayer that it has
verified a header from a fork. Then the handler should do what is
required by IBC in this case (stop?)
### Challenges in the handler requirement
- handlers and relayers work on different lightstores. In principle
  the lightstores need not intersect at any heights a priori
- if a relayer sees a header *h* at a handler (via
  `queryChainConsensusState`) that it doesn't know, the relayer needs to
  verify that header. If it cannot do so locally based on downloaded
  and verified (trusted?) light blocks, it might need to use
  `VerifyToTarget` (bisection). To call `VerifyToTarget` we might keep
  *h* in the lightstore. If verification fails, we need to download the
  "alternative" header of height *h.Height* to generate evidence for
  the handler.
- we have to specify what precisely `queryChainConsensusState`
returns. It cannot be the complete lightstore. Is the last header enough?
- we would like to assume that every now and then (within a period
  smaller than the trusting period) a correct relayer checks whether the
  handler is on a different branch than the relayer.
  And we would like this to be enough to achieve
  the handler requirement.
- here the correctness argument would be easy if a correct relayer is
  based on a light client with a *trusted* state, that is, a light
  client that never changes its opinion about trusted states. Then if
  such a correct relayer checks in with a handler, it will detect a
  fork, and act in time.
- if the light client does not provide this interface, in the case of
  a fork, we need some assumption about a correct relayer being on a
  different branch than the handler, and we need such a relayer to
  check in not too late. Also,
  what happens if the relayer's light client is forced to roll back
  its lightstore?
  Does it have to re-check all handlers?
## On the interconnectedness of things
In the broader discussion of so-called "fork accountability" there are
several subproblems
- Fork detection
- Evidence creation and submission
- Isolating misbehaving nodes (and reporting them for punishment over ABCI)
### Fork detection
The preliminary specification ./detection.md formalizes the notion of
a fork. Roughly, a fork exists if there are two conflicting headers
for the same height, where both are supported by bonded full nodes
(that have been validators in the near past, that is, within the
trusting period). We distinguish between a *fork on the chain*, where two
conflicting blocks are signed by +2/3 of the validators of that
height, and a *light client fork*, where one of the conflicting headers
is not signed by +2/3 of the validators of the current height, but by
+1/3 of the validators of some smaller height.
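
> The two notions can be phrased as predicates over conflicting headers.
> The following Go-style sketch is a non-normative paraphrase of the
> definitions in ./detection.md; the signature-checking helpers are
> assumptions.

```go
// Two headers conflict if they share a height but differ in content.
func conflicting(h1, h2 SignedHeader) bool {
    return h1.Header.Height == h2.Header.Height && h1.Header != h2.Header
}

// Fork on the chain: both headers carry +2/3 of the validators of that height.
func forkOnChain(h1, h2 SignedHeader) bool {
    return conflicting(h1, h2) &&
        signedByTwoThirds(h1) && signedByTwoThirds(h2) // assumed helpers
}

// Light client fork: h2 lacks +2/3 of the validators of the current
// height, but is signed by +1/3 of the validators of some smaller height hs.
func lightClientFork(h1, h2 SignedHeader, hs Height) bool {
    return conflicting(h1, h2) &&
        !signedByTwoThirds(h2) &&
        signedByOneThirdOfValidatorsAt(h2, hs) // assumed helper
}
```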
In principle everyone can detect a fork
- ./detection.md talks about the Tendermint light client with a focus on
  light nodes. A relayer runs such light clients and may detect
  forks in this way
- in IBC, a relayer can see that a handler is on a conflicting branch
- the relayer should feed the handler the necessary information so
that it can halt
- the relayer should report the fork to a full node
### Evidence creation and submission
- the information sent from the relayer to the handler could be called
  evidence, but this is perhaps a bad idea because the information sent to a
  full node can also be called evidence. That evidence might still
  not be enough, as the full node might need to run the "fork
  accountability" protocol to generate evidence in the form of
  consensus messages. So perhaps we should
  introduce different terms for:
- proof of fork for the handler (basically consisting of lightblocks)
- proof of fork for a full node (basically consisting of (fewer) lightblocks)
- proof of misbehavior (consensus messages)
### Isolating misbehaving nodes
- this is the job of a full node.
- might be subjective in the future: the protocol depends on what the
full node believes is the "correct" chain. Right now we postulate
that every full node is on the correct chain, that is, there is no
fork on the chain.
- The full node figures out which nodes are
- lunatic
- double signing
- amnesic; **using the challenge response protocol**
- We do not punish "phantom" validators
- currently we understand a phantom validator as a node that
    - signs a block for a height in which it is not in the
      validator set
    - is not part of the +1/3 of previous validators that
      are used to support the header. Whether we call a validator
      phantom might be subjective and depend on the header we
      check against. The formalization actually seems not so
      clear.
- they can only do something if there are +1/3 faulty validators
that are either lunatic, double signing, or amnesic.
- ABCI requires that we only report bonded validators. So if a
  node is a "phantom", we would need to check whether the node is
  bonded, which currently is expensive, as it requires checking
  blocks from the last three weeks.
- in the future, with state sync, a correct node might be
convinced by faulty nodes that it is in the validator set. Then
it might appear to be "phantom" although it behaves correctly
## Next steps
> The following points are subject to my limited knowledge of the
> state of the work on IBC. Some/most of it might already exist and we
> will just need to bring everything together.
- "proof of fork for a full node" defines a clean interface between
fork detection and misbehavior isolation. So it should be produced
by protocols (light client, the relayer). So we should fix that
first.
- Given the problems of not having a light client architecture spec,
  we should start with one for the relayer. E.g.
- the relayer runs light clients for two chains
- the relayer regularly queries consensus state of a handler
- the relayer needs to check the consensus state
- this involves local checks
- this involves calling the light client
- the relayer uses the light client to do IBC business (channels,
packets, connections, etc.)
- the relayer submits proof of fork to handlers and full nodes
> the list is definitely not complete. I think part of this
> (perhaps all) is
> covered by what Anca presented recently.
We will need to define what we expect from these components
- for the parts where the relayer talks to the handler, we need to fix
the interface, and what the handler does
- we write specs for these components.

BIN rust-spec/lightclient/experiments.png (width: 1074, height: 324, size: 82 KiB)

+49 -0 rust-spec/lightclient/verification/001bmc-apalache.csv
no,filename,tool,timeout,init,inv,next,args
1,MC4_3_correct.tla,apalache,1h,,PositiveBeforeTrustedHeaderExpires,,--length=30
2,MC4_3_correct.tla,apalache,1h,,CorrectnessInv,,--length=30
3,MC4_3_correct.tla,apalache,1h,,PrecisionInv,,--length=30
4,MC4_3_correct.tla,apalache,1h,,SuccessOnCorrectPrimaryAndChainOfTrust,,--length=30
5,MC4_3_correct.tla,apalache,1h,,NoFailedBlocksOnSuccessInv,,--length=30
6,MC4_3_correct.tla,apalache,1h,,StoredHeadersAreVerifiedOrNotTrustedInv,,--length=30
7,MC4_3_correct.tla,apalache,1h,,CorrectPrimaryAndTimeliness,,--length=30
8,MC4_3_correct.tla,apalache,1h,,Complexity,,--length=30
9,MC4_3_faulty.tla,apalache,1h,,PositiveBeforeTrustedHeaderExpires,,--length=30
10,MC4_3_faulty.tla,apalache,1h,,CorrectnessInv,,--length=30
11,MC4_3_faulty.tla,apalache,1h,,PrecisionInv,,--length=30
12,MC4_3_faulty.tla,apalache,1h,,SuccessOnCorrectPrimaryAndChainOfTrust,,--length=30
13,MC4_3_faulty.tla,apalache,1h,,NoFailedBlocksOnSuccessInv,,--length=30
14,MC4_3_faulty.tla,apalache,1h,,StoredHeadersAreVerifiedOrNotTrustedInv,,--length=30
15,MC4_3_faulty.tla,apalache,1h,,CorrectPrimaryAndTimeliness,,--length=30
16,MC4_3_faulty.tla,apalache,1h,,Complexity,,--length=30
17,MC5_5_correct.tla,apalache,1h,,PositiveBeforeTrustedHeaderExpires,,--length=30
18,MC5_5_correct.tla,apalache,1h,,CorrectnessInv,,--length=30
19,MC5_5_correct.tla,apalache,1h,,PrecisionInv,,--length=30
20,MC5_5_correct.tla,apalache,1h,,SuccessOnCorrectPrimaryAndChainOfTrust,,--length=30
21,MC5_5_correct.tla,apalache,1h,,NoFailedBlocksOnSuccessInv,,--length=30
22,MC5_5_correct.tla,apalache,1h,,StoredHeadersAreVerifiedOrNotTrustedInv,,--length=30
23,MC5_5_correct.tla,apalache,1h,,CorrectPrimaryAndTimeliness,,--length=30
24,MC5_5_correct.tla,apalache,1h,,Complexity,,--length=30
25,MC5_5_faulty.tla,apalache,1h,,PositiveBeforeTrustedHeaderExpires,,--length=30
26,MC5_5_faulty.tla,apalache,1h,,CorrectnessInv,,--length=30
27,MC5_5_faulty.tla,apalache,1h,,PrecisionInv,,--length=30
28,MC5_5_faulty.tla,apalache,1h,,SuccessOnCorrectPrimaryAndChainOfTrust,,--length=30
29,MC5_5_faulty.tla,apalache,1h,,NoFailedBlocksOnSuccessInv,,--length=30
30,MC5_5_faulty.tla,apalache,1h,,StoredHeadersAreVerifiedOrNotTrustedInv,,--length=30
31,MC5_5_faulty.tla,apalache,1h,,CorrectPrimaryAndTimeliness,,--length=30
32,MC5_5_faulty.tla,apalache,1h,,Complexity,,--length=30
33,MC7_5_faulty.tla,apalache,10h,,PositiveBeforeTrustedHeaderExpires,,--length=30
34,MC7_5_faulty.tla,apalache,10h,,CorrectnessInv,,--length=30
35,MC7_5_faulty.tla,apalache,10h,,PrecisionInv,,--length=30
36,MC7_5_faulty.tla,apalache,10h,,SuccessOnCorrectPrimaryAndChainOfTrust,,--length=30
37,MC7_5_faulty.tla,apalache,10h,,NoFailedBlocksOnSuccessInv,,--length=30
38,MC7_5_faulty.tla,apalache,10h,,StoredHeadersAreVerifiedOrNotTrustedInv,,--length=30
39,MC7_5_faulty.tla,apalache,10h,,CorrectPrimaryAndTimeliness,,--length=30
40,MC7_5_faulty.tla,apalache,10h,,Complexity,,--length=30
41,MC4_7_faulty.tla,apalache,10h,,PositiveBeforeTrustedHeaderExpires,,--length=30
42,MC4_7_faulty.tla,apalache,10h,,CorrectnessInv,,--length=30
43,MC4_7_faulty.tla,apalache,10h,,PrecisionInv,,--length=30
44,MC4_7_faulty.tla,apalache,10h,,SuccessOnCorrectPrimaryAndChainOfTrust,,--length=30
45,MC4_7_faulty.tla,apalache,10h,,NoFailedBlocksOnSuccessInv,,--length=30
46,MC4_7_faulty.tla,apalache,10h,,StoredHeadersAreVerifiedOrNotTrustedInv,,--length=30
47,MC4_7_faulty.tla,apalache,10h,,CorrectPrimaryAndTimeliness,,--length=30
48,MC4_7_faulty.tla,apalache,10h,,Complexity,,--length=30

+171 -0 rust-spec/lightclient/verification/Blockchain_A_1.tla
------------------------ MODULE Blockchain_A_1 -----------------------------
(*
This is a high-level specification of Tendermint blockchain
that is designed specifically for the light client.
Validators have the voting power of one. If you like to model various
voting powers, introduce multiple copies of the same validator
(do not forget to give them unique names though).
*)
EXTENDS Integers, FiniteSets
Min(a, b) == IF a < b THEN a ELSE b
CONSTANT
AllNodes,
(* a set of all nodes that can act as validators (correct and faulty) *)
ULTIMATE_HEIGHT,
(* a maximal height that can be ever reached (modelling artifact) *)
TRUSTING_PERIOD
(* the period within which the validators are trusted *)
Heights == 1..ULTIMATE_HEIGHT (* possible heights *)
(* A commit is just a set of nodes who have committed the block *)
Commits == SUBSET AllNodes
(* The set of all block headers that can be on the blockchain.
This is a simplified version of the Block data structure in the actual implementation. *)
BlockHeaders == [
height: Heights,
\* the block height
time: Int,
\* the block timestamp in some integer units
lastCommit: Commits,
\* the nodes who have voted on the previous block, the set itself instead of a hash
(* in the implementation, only the hashes of V and NextV are stored in a block,
as V and NextV are stored in the application state *)
VS: SUBSET AllNodes,
       \* the validators of this block. We store the validators instead of the hash.
NextVS: SUBSET AllNodes
\* the validators of the next block. We store the next validators instead of the hash.
]
(* A signed header is just a header together with a set of commits *)
LightBlocks == [header: BlockHeaders, Commits: Commits]
VARIABLES
now,
(* the current global time in integer units *)
blockchain,
    (* A sequence of BlockHeaders, which gives us a bird's-eye view of the blockchain. *)
Faulty
(* A set of faulty nodes, which can act as validators. We assume that the set
of faulty processes is non-decreasing. If a process has recovered, it should
connect using a different id. *)
(* all variables, to be used with UNCHANGED *)
vars == <<now, blockchain, Faulty>>
(* The set of all correct nodes in a state *)
Corr == AllNodes \ Faulty
(* APALACHE annotations *)
a <: b == a \* type annotation
NT == STRING
NodeSet(S) == S <: {NT}
EmptyNodeSet == NodeSet({})
BT == [height |-> Int, time |-> Int, lastCommit |-> {NT}, VS |-> {NT}, NextVS |-> {NT}]
LBT == [header |-> BT, Commits |-> {NT}]
(* end of APALACHE annotations *)
(****************************** BLOCKCHAIN ************************************)
(* the header is still within the trusting period *)
InTrustingPeriod(header) ==
now <= header.time + TRUSTING_PERIOD
(*
 Test whether the nodes in pNodes have more than 2/3 of the voting power
 among the validators in pVS (every validator has voting power one).
*)
TwoThirds(pVS, pNodes) ==
LET TP == Cardinality(pVS)
SP == Cardinality(pVS \intersect pNodes)
IN
3 * SP > 2 * TP \* when thinking in real numbers, not integers: SP > 2.0 / 3.0 * TP
(*
 Given a set of faulty nodes, test whether the correct nodes in pVS have
 more than twice the voting power of the faulty nodes in pVS, that is,
 more than 2/3 of the total voting power.
*)
IsCorrectPower(pFaultyNodes, pVS) ==
LET FN == pFaultyNodes \intersect pVS \* faulty nodes in pNodes
CN == pVS \ pFaultyNodes \* correct nodes in pNodes
CP == Cardinality(CN) \* power of the correct nodes
FP == Cardinality(FN) \* power of the faulty nodes
IN
\* CP + FP = TP is the total voting power, so we write CP > 2.0 / 3 * TP as follows:
CP > 2 * FP \* Note: when FP = 0, this implies CP > 0.
(* This is what we believe is the assumption about failures in Tendermint *)
FaultAssumption(pFaultyNodes, pNow, pBlockchain) ==
\A h \in Heights:
pBlockchain[h].time + TRUSTING_PERIOD > pNow =>
IsCorrectPower(pFaultyNodes, pBlockchain[h].NextVS)
(* Can a block be produced by a correct peer, or an authenticated Byzantine peer *)
IsLightBlockAllowedByDigitalSignatures(ht, block) ==
\/ block.header = blockchain[ht] \* signed by correct and faulty (maybe)
\/ block.Commits \subseteq Faulty /\ block.header.height = ht \* signed only by faulty
(*
Initialize the blockchain to the ultimate height right in the initial states.
We pick the faulty validators statically, but that should not affect the light client.
*)
InitToHeight ==
/\ Faulty \in SUBSET AllNodes \* some nodes may fail
\* pick the validator sets and last commits
/\ \E vs, lastCommit \in [Heights -> SUBSET AllNodes]:
\E timestamp \in [Heights -> Int]:
\* now is at least as early as the timestamp in the last block
/\ \E tm \in Int: now = tm /\ tm >= timestamp[ULTIMATE_HEIGHT]
\* the genesis starts on day 1
/\ timestamp[1] = 1
/\ vs[1] = AllNodes
/\ lastCommit[1] = EmptyNodeSet
/\ \A h \in Heights \ {1}:
/\ lastCommit[h] \subseteq vs[h - 1] \* the non-validators cannot commit
/\ TwoThirds(vs[h - 1], lastCommit[h]) \* the commit has >2/3 of validator votes
/\ IsCorrectPower(Faulty, vs[h]) \* the correct validators have >2/3 of power
/\ timestamp[h] > timestamp[h - 1] \* the time grows monotonically
/\ timestamp[h] < timestamp[h - 1] + TRUSTING_PERIOD \* but not too fast
\* form the block chain out of validator sets and commits (this makes apalache faster)
/\ blockchain = [h \in Heights |->
[height |-> h,
time |-> timestamp[h],
VS |-> vs[h],
NextVS |-> IF h < ULTIMATE_HEIGHT THEN vs[h + 1] ELSE AllNodes,
lastCommit |-> lastCommit[h]]
] \******
(* is the blockchain in the faulty zone where the Tendermint security model does not apply *)
InFaultyZone ==
~FaultAssumption(Faulty, now, blockchain)
(********************* BLOCKCHAIN ACTIONS ********************************)
(*
Advance the clock by zero or more time units.
*)
AdvanceTime ==
\E tm \in Int: tm >= now /\ now' = tm
/\ UNCHANGED <<blockchain, Faulty>>
(*
One more process fails. As a result, the blockchain may move into the faulty zone.
The light client is not using this action, as the faults are picked in the initial state.
However, this action may be useful when reasoning about fork detection.
*)
OneMoreFault ==
/\ \E n \in AllNodes \ Faulty:
/\ Faulty' = Faulty \cup {n}
       /\ Faulty' /= AllNodes \* at least one process remains non-faulty
/\ UNCHANGED <<now, blockchain>>
=============================================================================
\* Modification History
\* Last modified Wed Jun 10 14:10:54 CEST 2020 by igor
\* Created Fri Oct 11 15:45:11 CEST 2019 by igor

+440 -0 rust-spec/lightclient/verification/Lightclient_A_1.tla
-------------------------- MODULE Lightclient_A_1 ----------------------------
(**
* A state-machine specification of the lite client, following the English spec:
*
* https://github.com/informalsystems/tendermint-rs/blob/master/docs/spec/lightclient/verification.md
*)
EXTENDS Integers, FiniteSets
\* the parameters of Light Client
CONSTANTS
TRUSTED_HEIGHT,
(* an index of the block header that the light client trusts by social consensus *)
TARGET_HEIGHT,
(* an index of the block header that the light client tries to verify *)
TRUSTING_PERIOD,
(* the period within which the validators are trusted *)
IS_PRIMARY_CORRECT
(* is primary correct? *)
VARIABLES (* see TypeOK below for the variable types *)
state, (* the current state of the light client *)
nextHeight, (* the next height to explore by the light client *)
nprobes (* the lite client iteration, or the number of block tests *)
(* the light store *)
VARIABLES
fetchedLightBlocks, (* a function from heights to LightBlocks *)
lightBlockStatus, (* a function from heights to block statuses *)
latestVerified (* the latest verified block *)
(* the variables of the lite client *)
lcvars == <<state, nextHeight, fetchedLightBlocks, lightBlockStatus, latestVerified>>
(******************* Blockchain instance ***********************************)
\* the parameters that are propagated into Blockchain
CONSTANTS
AllNodes
(* a set of all nodes that can act as validators (correct and faulty) *)
\* the state variables of Blockchain, see Blockchain.tla for the details
VARIABLES now, blockchain, Faulty
\* All the variables of Blockchain. For some reason, BC!vars does not work
bcvars == <<now, blockchain, Faulty>>
(* Create an instance of Blockchain.
We could write EXTENDS Blockchain, but then all the constants and state variables
would be hidden inside the Blockchain module.
*)
ULTIMATE_HEIGHT == TARGET_HEIGHT + 1
BC == INSTANCE Blockchain_A_1 WITH
now <- now, blockchain <- blockchain, Faulty <- Faulty
(************************** Lite client ************************************)
(* the heights on which the light client is working *)
HEIGHTS == TRUSTED_HEIGHT..TARGET_HEIGHT
(* the control states of the lite client *)
States == { "working", "finishedSuccess", "finishedFailure" }
(**
Check the precondition of ValidAndVerified.
[LCV-FUNC-VALID.1::TLA-PRE.1]
*)
ValidAndVerifiedPre(trusted, untrusted) ==
LET thdr == trusted.header
uhdr == untrusted.header
IN
/\ BC!InTrustingPeriod(thdr)
/\ thdr.height < uhdr.height
\* the trusted block has been created earlier (no drift here)
/\ thdr.time <= uhdr.time
/\ untrusted.Commits \subseteq uhdr.VS
/\ LET TP == Cardinality(uhdr.VS)
SP == Cardinality(untrusted.Commits)
IN
3 * SP > 2 * TP
/\ thdr.height + 1 = uhdr.height => thdr.NextVS = uhdr.VS
(* As we do not have explicit hashes we ignore these three checks of the English spec:
1. "trusted.Commit is a commit is for the header trusted.Header,
i.e. it contains the correct hash of the header".
2. untrusted.Validators = hash(untrusted.Header.Validators)
3. untrusted.NextValidators = hash(untrusted.Header.NextValidators)
*)
(**
 * Check that the commits in an untrusted block form more than 1/3 of the
 * next validators in a trusted header.
*)
SignedByOneThirdOfTrusted(trusted, untrusted) ==
LET TP == Cardinality(trusted.header.NextVS)
SP == Cardinality(untrusted.Commits \intersect trusted.header.NextVS)
IN
3 * SP > TP
(**
Check, whether an untrusted block is valid and verifiable w.r.t. a trusted header.
[LCV-FUNC-VALID.1::TLA.1]
*)
ValidAndVerified(trusted, untrusted) ==
IF ~ValidAndVerifiedPre(trusted, untrusted)
THEN "FAILED_VERIFICATION"
ELSE IF ~BC!InTrustingPeriod(untrusted.header)
(* We leave the following test for the documentation purposes.
The implementation should do this test, as signature verification may be slow.
In the TLA+ specification, ValidAndVerified happens in no time.
*)
THEN "FAILED_TRUSTING_PERIOD"
ELSE IF untrusted.header.height = trusted.header.height + 1
\/ SignedByOneThirdOfTrusted(trusted, untrusted)
THEN "OK"
ELSE "CANNOT_VERIFY"
(*
Initial states of the light client.
Initially, only the trusted light block is present.
*)
LCInit ==
/\ state = "working"
/\ nextHeight = TARGET_HEIGHT
/\ nprobes = 0 \* no tests have been done so far
/\ LET trustedBlock == blockchain[TRUSTED_HEIGHT]
trustedLightBlock == [header |-> trustedBlock, Commits |-> AllNodes]
IN
\* initially, fetchedLightBlocks is a function of one element, i.e., TRUSTED_HEIGHT
/\ fetchedLightBlocks = [h \in {TRUSTED_HEIGHT} |-> trustedLightBlock]
\* initially, lightBlockStatus is a function of one element, i.e., TRUSTED_HEIGHT
/\ lightBlockStatus = [h \in {TRUSTED_HEIGHT} |-> "StateVerified"]
      \* the latest verified block is the trusted block
/\ latestVerified = trustedLightBlock
\* block should contain a copy of the block from the reference chain, with a matching commit
CopyLightBlockFromChain(block, height) ==
LET ref == blockchain[height]
lastCommit ==
IF height < ULTIMATE_HEIGHT
THEN blockchain[height + 1].lastCommit
\* for the ultimate block, which we never use, as ULTIMATE_HEIGHT = TARGET_HEIGHT + 1
ELSE blockchain[height].VS
IN
block = [header |-> ref, Commits |-> lastCommit]
\* Either the primary is correct and the block comes from the reference chain,
\* or the block is produced by a faulty primary.
\*
\* [LCV-FUNC-FETCH.1::TLA.1]
FetchLightBlockInto(block, height) ==
IF IS_PRIMARY_CORRECT
THEN CopyLightBlockFromChain(block, height)
ELSE BC!IsLightBlockAllowedByDigitalSignatures(height, block)
\* add a block into the light store
\*
\* [LCV-FUNC-UPDATE.1::TLA.1]
LightStoreUpdateBlocks(lightBlocks, block) ==
LET ht == block.header.height IN
[h \in DOMAIN lightBlocks \union {ht} |->
IF h = ht THEN block ELSE lightBlocks[h]]
\* update the state of a light block
\*
\* [LCV-FUNC-UPDATE.1::TLA.1]
LightStoreUpdateStates(statuses, ht, blockState) ==
[h \in DOMAIN statuses \union {ht} |->
IF h = ht THEN blockState ELSE statuses[h]]
\* Check, whether newHeight is a possible next height for the light client.
\*
\* [LCV-FUNC-SCHEDULE.1::TLA.1]
CanScheduleTo(newHeight, pLatestVerified, pNextHeight, pTargetHeight) ==
LET ht == pLatestVerified.header.height IN
\/ /\ ht = pNextHeight
/\ ht < pTargetHeight
/\ pNextHeight < newHeight
/\ newHeight <= pTargetHeight
\/ /\ ht < pNextHeight
/\ ht < pTargetHeight
/\ ht < newHeight
/\ newHeight < pNextHeight
\/ /\ ht = pTargetHeight
/\ newHeight = pTargetHeight
\* The loop of VerifyToTarget.
\*
\* [LCV-FUNC-MAIN.1::TLA-LOOP.1]
VerifyToTargetLoop ==
\* the loop condition is true
/\ latestVerified.header.height < TARGET_HEIGHT
\* pick a light block, which will be constrained later
/\ \E current \in BC!LightBlocks:
\* Get next LightBlock for verification
/\ IF nextHeight \in DOMAIN fetchedLightBlocks
THEN \* copy the block from the light store
/\ current = fetchedLightBlocks[nextHeight]
/\ UNCHANGED fetchedLightBlocks
ELSE \* retrieve a light block and save it in the light store
/\ FetchLightBlockInto(current, nextHeight)
/\ fetchedLightBlocks' = LightStoreUpdateBlocks(fetchedLightBlocks, current)
\* Record that one more probe has been done (for complexity and model checking)
/\ nprobes' = nprobes + 1
\* Verify the current block
/\ LET verdict == ValidAndVerified(latestVerified, current) IN
\* Decide whether/how to continue
CASE verdict = "OK" ->
/\ lightBlockStatus' = LightStoreUpdateStates(lightBlockStatus, nextHeight, "StateVerified")
/\ latestVerified' = current
/\ state' =
IF latestVerified'.header.height < TARGET_HEIGHT
THEN "working"
ELSE "finishedSuccess"
/\ \E newHeight \in HEIGHTS:
/\ CanScheduleTo(newHeight, current, nextHeight, TARGET_HEIGHT)
/\ nextHeight' = newHeight
[] verdict = "CANNOT_VERIFY" ->
(*
do nothing: the light block current passed validation, but the validator
set is too different to verify it. We keep the state of
current at StateUnverified. For a later iteration, Schedule
might decide to try verification of that light block again.
*)
/\ lightBlockStatus' = LightStoreUpdateStates(lightBlockStatus, nextHeight, "StateUnverified")
/\ \E newHeight \in HEIGHTS:
/\ CanScheduleTo(newHeight, latestVerified, nextHeight, TARGET_HEIGHT)
/\ nextHeight' = newHeight
/\ UNCHANGED <<latestVerified, state>>
[] OTHER ->
\* verdict is some error code
/\ lightBlockStatus' = LightStoreUpdateStates(lightBlockStatus, nextHeight, "StateFailed")
/\ state' = "finishedFailure"
/\ UNCHANGED <<latestVerified, nextHeight>>
\* The terminating condition of VerifyToTarget.
\*
\* [LCV-FUNC-MAIN.1::TLA-LOOPCOND.1]
VerifyToTargetDone ==
/\ latestVerified.header.height >= TARGET_HEIGHT
/\ state' = "finishedSuccess"
/\ UNCHANGED <<nextHeight, nprobes, fetchedLightBlocks, lightBlockStatus, latestVerified>>
(********************* Lite client + Blockchain *******************)
Init ==
\* the blockchain is initialized immediately to the ULTIMATE_HEIGHT
/\ BC!InitToHeight
\* the light client starts
/\ LCInit
(*
The system step is very simple.
The light client is either executing VerifyToTarget, or it has terminated.
(In the latter case, a model checker reports a deadlock.)
Simultaneously, the global clock may advance.
*)
Next ==
/\ state = "working"
/\ VerifyToTargetLoop \/ VerifyToTargetDone
/\ BC!AdvanceTime \* the global clock is advanced by zero or more time units
(************************* Types ******************************************)
TypeOK ==
/\ state \in States
/\ nextHeight \in HEIGHTS
/\ latestVerified \in BC!LightBlocks
/\ \E HS \in SUBSET HEIGHTS:
/\ fetchedLightBlocks \in [HS -> BC!LightBlocks]
/\ lightBlockStatus
\in [HS -> {"StateVerified", "StateUnverified", "StateFailed"}]
(************************* Properties ******************************************)
(* The properties to check *)
\* this invariant candidate is false
NeverFinish ==
state = "working"
\* this invariant candidate is false
NeverFinishNegative ==
state /= "finishedFailure"
\* This invariant holds true, when the primary is correct.
\* This invariant candidate is false when the primary is faulty.
NeverFinishNegativeWhenTrusted ==
(*(minTrustedHeight <= TRUSTED_HEIGHT)*)
BC!InTrustingPeriod(blockchain[TRUSTED_HEIGHT])
=> state /= "finishedFailure"
\* this invariant candidate is false
NeverFinishPositive ==
state /= "finishedSuccess"
(**
Correctness states that all the obtained headers are exactly like in the blockchain.
It is always the case that every verified header in LightStore was generated by
an instance of Tendermint consensus.
[LCV-DIST-SAFE.1::CORRECTNESS-INV.1]
*)
CorrectnessInv ==
\A h \in DOMAIN fetchedLightBlocks:
lightBlockStatus[h] = "StateVerified" =>
fetchedLightBlocks[h].header = blockchain[h]
(**
 Check that the sequence of the headers in fetchedLightBlocks satisfies ValidAndVerified = "OK" pairwise.
This property is easily violated, whenever a header cannot be trusted anymore.
*)
StoredHeadersAreVerifiedInv ==
state = "finishedSuccess"
=>
\A lh, rh \in DOMAIN fetchedLightBlocks: \* for every pair of different stored headers
\/ lh >= rh
\* either there is a header between them
\/ \E mh \in DOMAIN fetchedLightBlocks:
lh < mh /\ mh < rh
\* or we can verify the right one using the left one
\/ "OK" = ValidAndVerified(fetchedLightBlocks[lh], fetchedLightBlocks[rh])
\* An improved version of StoredHeadersAreVerifiedInv, assuming that a header may not be trusted.
\* This invariant candidate is also violated,
\* as there may be some unverified blocks left in the middle.
StoredHeadersAreVerifiedOrNotTrustedInv ==
state = "finishedSuccess"
=>
\A lh, rh \in DOMAIN fetchedLightBlocks: \* for every pair of different stored headers
\/ lh >= rh
\* either there is a header between them
\/ \E mh \in DOMAIN fetchedLightBlocks:
lh < mh /\ mh < rh
\* or we can verify the right one using the left one
\/ "OK" = ValidAndVerified(fetchedLightBlocks[lh], fetchedLightBlocks[rh])
\* or the left header is outside the trusting period, so no guarantees
\/ ~BC!InTrustingPeriod(fetchedLightBlocks[lh].header)
(**
 * An improved version of StoredHeadersAreVerifiedOrNotTrustedInv,
* checking the property only for the verified headers.
* This invariant holds true.
*)
ProofOfChainOfTrustInv ==
state = "finishedSuccess"
=>
\A lh, rh \in DOMAIN fetchedLightBlocks:
\* for every pair of stored headers that have been verified
\/ lh >= rh
\/ lightBlockStatus[lh] = "StateUnverified"
\/ lightBlockStatus[rh] = "StateUnverified"
\* either there is a header between them
\/ \E mh \in DOMAIN fetchedLightBlocks:
lh < mh /\ mh < rh /\ lightBlockStatus[mh] = "StateVerified"
\* or the left header is outside the trusting period, so no guarantees
\/ ~(BC!InTrustingPeriod(fetchedLightBlocks[lh].header))
\* or we can verify the right one using the left one
\/ "OK" = ValidAndVerified(fetchedLightBlocks[lh], fetchedLightBlocks[rh])
(**
* When the light client terminates, there are no failed blocks. (Otherwise, someone lied to us.)
*)
NoFailedBlocksOnSuccessInv ==
state = "finishedSuccess" =>
\A h \in DOMAIN fetchedLightBlocks:
lightBlockStatus[h] /= "StateFailed"
\* This property states that whenever the light client finishes with a positive outcome,
\* the trusted header is still within the trusting period.
\* We expect this property to be violated. And Apalache shows us a counterexample.
PositiveBeforeTrustedHeaderExpires ==
(state = "finishedSuccess") => BC!InTrustingPeriod(blockchain[TRUSTED_HEIGHT])
\* If the primary is correct and the initial trusted block has not expired,
\* then whenever the algorithm terminates, it reports "success"
CorrectPrimaryAndTimeliness ==
(BC!InTrustingPeriod(blockchain[TRUSTED_HEIGHT])
/\ state /= "working" /\ IS_PRIMARY_CORRECT) =>
state = "finishedSuccess"
(**
If the primary is correct and there is a trusted block that has not expired,
then whenever the algorithm terminates, it reports "success".
[LCV-DIST-LIVE.1::SUCCESS-CORR-PRIMARY-CHAIN-OF-TRUST.1]
*)
SuccessOnCorrectPrimaryAndChainOfTrust ==
(\E h \in DOMAIN fetchedLightBlocks:
lightBlockStatus[h] = "StateVerified" /\ BC!InTrustingPeriod(blockchain[h])
/\ state /= "working" /\ IS_PRIMARY_CORRECT) =>
state = "finishedSuccess"
\* Lite Client Completeness: If header h was correctly generated by an instance
\* of Tendermint consensus (and its age is less than the trusting period),
\* then the lite client should eventually set trust(h) to true.
\*
\* Note that Completeness assumes that the lite client communicates with a correct full node.
\*
\* We decompose completeness into Termination (liveness) and Precision (safety).
\* Once again, Precision is an inverse version of the safety property in Completeness,
\* as A => B is logically equivalent to ~B => ~A.
PrecisionInv ==
(state = "finishedFailure")
=> \/ ~BC!InTrustingPeriod(blockchain[TRUSTED_HEIGHT]) \* outside of the trusting period
\/ \E h \in DOMAIN fetchedLightBlocks:
LET lightBlock == fetchedLightBlocks[h] IN
\* the full node lied to the lite client about the block header
\/ lightBlock.header /= blockchain[h]
\* the full node lied to the lite client about the commits
\/ lightBlock.Commits /= lightBlock.header.VS
\* the old invariant that was found to be buggy by TLC
PrecisionBuggyInv ==
(state = "finishedFailure")
=> \/ ~BC!InTrustingPeriod(blockchain[TRUSTED_HEIGHT]) \* outside of the trusting period
\/ \E h \in DOMAIN fetchedLightBlocks:
LET lightBlock == fetchedLightBlocks[h] IN
\* the full node lied to the lite client about the block header
lightBlock.header /= blockchain[h]
\* the worst complexity
Complexity ==
LET N == TARGET_HEIGHT - TRUSTED_HEIGHT + 1 IN
state /= "working" =>
(2 * nprobes <= N * (N - 1))
(*
We omit termination, as the algorithm deadlocks in the end.
So termination can be demonstrated by finding a deadlock.
Of course, one has to analyze the deadlocked state and see that
the algorithm has indeed terminated there.
*)
=============================================================================
\* Modification History
\* Last modified Fri Jun 26 12:08:28 CEST 2020 by igor
\* Created Wed Oct 02 16:39:42 CEST 2019 by igor

+15 -0 rust-spec/lightclient/verification/MC4_3_correct.tla
---------------------------- MODULE MC4_3_correct ---------------------------
AllNodes == {"n1", "n2", "n3", "n4"}
TRUSTED_HEIGHT == 1
TARGET_HEIGHT == 3
TRUSTING_PERIOD == 1400 \* two weeks, one day is 100 time units :-)
IS_PRIMARY_CORRECT == TRUE
VARIABLES
state, nextHeight, fetchedLightBlocks, lightBlockStatus, latestVerified,
nprobes,
now, blockchain, Faulty
INSTANCE Lightclient_A_1
==============================================================================

+15 -0 rust-spec/lightclient/verification/MC4_3_faulty.tla
---------------------------- MODULE MC4_3_faulty ---------------------------
AllNodes == {"n1", "n2", "n3", "n4"}
TRUSTED_HEIGHT == 1
TARGET_HEIGHT == 3
TRUSTING_PERIOD == 1400 \* two weeks, one day is 100 time units :-)
IS_PRIMARY_CORRECT == FALSE
VARIABLES
state, nextHeight, fetchedLightBlocks, lightBlockStatus, latestVerified,
nprobes,
now, blockchain, Faulty
INSTANCE Lightclient_A_1
==============================================================================

+15 -0 rust-spec/lightclient/verification/MC4_4_faulty.tla
---------------------------- MODULE MC4_4_faulty ---------------------------
AllNodes == {"n1", "n2", "n3", "n4"}
TRUSTED_HEIGHT == 1
TARGET_HEIGHT == 4
TRUSTING_PERIOD == 1400 \* two weeks, one day is 100 time units :-)
IS_PRIMARY_CORRECT == FALSE
VARIABLES
state, nextHeight, fetchedLightBlocks, lightBlockStatus, latestVerified,
nprobes,
now, blockchain, Faulty
INSTANCE Lightclient_A_1
==============================================================================

+15 -0 rust-spec/lightclient/verification/MC4_5_correct.tla
------------------------- MODULE MC4_5_correct ---------------------------
AllNodes == {"n1", "n2", "n3", "n4"}
TRUSTED_HEIGHT == 1
TARGET_HEIGHT == 5
TRUSTING_PERIOD == 1400 \* two weeks, one day is 100 time units :-)
IS_PRIMARY_CORRECT == TRUE
VARIABLES
state, nextHeight, fetchedLightBlocks, lightBlockStatus, latestVerified,
nprobes,
now, blockchain, Faulty
INSTANCE Lightclient_A_1
============================================================================

+15 -0 rust-spec/lightclient/verification/MC4_5_faulty.tla
------------------------- MODULE MC4_5_faulty ---------------------------
AllNodes == {"n1", "n2", "n3", "n4"}
TRUSTED_HEIGHT == 1
TARGET_HEIGHT == 5
TRUSTING_PERIOD == 1400 \* two weeks, one day is 100 time units :-)
IS_PRIMARY_CORRECT == FALSE
VARIABLES
  state, nextHeight, fetchedLightBlocks, lightBlockStatus, latestVerified,
  nprobes,
now, blockchain, Faulty
INSTANCE Lightclient_A_1
============================================================================

+15 -0 rust-spec/lightclient/verification/MC4_6_faulty.tla
------------------------- MODULE MC4_6_faulty ---------------------------
AllNodes == {"n1", "n2", "n3", "n4"}
TRUSTED_HEIGHT == 1
TARGET_HEIGHT == 6
TRUSTING_PERIOD == 1400 \* two weeks, one day is 100 time units :-)
IS_PRIMARY_CORRECT == FALSE
VARIABLES
state, nextHeight, fetchedLightBlocks, lightBlockStatus, latestVerified,
nprobes,
now, blockchain, Faulty
INSTANCE Lightclient_A_1
============================================================================

+15 -0 rust-spec/lightclient/verification/MC4_7_faulty.tla
------------------------- MODULE MC4_7_faulty ---------------------------
AllNodes == {"n1", "n2", "n3", "n4"}
TRUSTED_HEIGHT == 1
TARGET_HEIGHT == 7
TRUSTING_PERIOD == 1400 \* two weeks, one day is 100 time units :-)
IS_PRIMARY_CORRECT == FALSE
VARIABLES
state, nextHeight, fetchedLightBlocks, lightBlockStatus, latestVerified,
nprobes,
now, blockchain, Faulty
INSTANCE Lightclient_A_1
============================================================================

+15 -0 rust-spec/lightclient/verification/MC5_5_correct.tla
------------------------- MODULE MC5_5_correct ---------------------------
AllNodes == {"n1", "n2", "n3", "n4", "n5"}
TRUSTED_HEIGHT == 1
TARGET_HEIGHT == 5
TRUSTING_PERIOD == 1400 \* two weeks, one day is 100 time units :-)
IS_PRIMARY_CORRECT == TRUE
VARIABLES
  state, nextHeight, fetchedLightBlocks, lightBlockStatus, latestVerified,
  nprobes,
now, blockchain, Faulty
INSTANCE Lightclient_A_1
============================================================================

+15 -0 rust-spec/lightclient/verification/MC5_5_faulty.tla
------------------------- MODULE MC5_5_faulty ---------------------------
AllNodes == {"n1", "n2", "n3", "n4", "n5"}
TRUSTED_HEIGHT == 1
TARGET_HEIGHT == 5
TRUSTING_PERIOD == 1400 \* two weeks, one day is 100 time units :-)
IS_PRIMARY_CORRECT == FALSE
VARIABLES
  state, nextHeight, fetchedLightBlocks, lightBlockStatus, latestVerified,
  nprobes,
now, blockchain, Faulty
INSTANCE Lightclient_A_1
============================================================================

+15 -0 rust-spec/lightclient/verification/MC5_7_faulty.tla
------------------------- MODULE MC5_7_faulty ---------------------------
AllNodes == {"n1", "n2", "n3", "n4", "n5"}
TRUSTED_HEIGHT == 1
TARGET_HEIGHT == 7
TRUSTING_PERIOD == 1400 \* two weeks, one day is 100 time units :-)
IS_PRIMARY_CORRECT == FALSE
VARIABLES
  state, nextHeight, fetchedLightBlocks, lightBlockStatus, latestVerified,
  nprobes,
now, blockchain, Faulty
INSTANCE Lightclient_A_1
============================================================================

+15 -0 rust-spec/lightclient/verification/MC7_5_faulty.tla
------------------------- MODULE MC7_5_faulty ---------------------------
AllNodes == {"n1", "n2", "n3", "n4", "n5", "n6", "n7"}
TRUSTED_HEIGHT == 1
TARGET_HEIGHT == 5
TRUSTING_PERIOD == 1400 \* two weeks, one day is 100 time units :-)
IS_PRIMARY_CORRECT == FALSE
VARIABLES
  state, nextHeight, fetchedLightBlocks, lightBlockStatus, latestVerified,
  nprobes,
now, blockchain, Faulty
INSTANCE Lightclient_A_1
============================================================================

+15 -0 rust-spec/lightclient/verification/MC7_7_faulty.tla
------------------------- MODULE MC7_7_faulty ---------------------------
AllNodes == {"n1", "n2", "n3", "n4", "n5", "n6", "n7"}
TRUSTED_HEIGHT == 1
TARGET_HEIGHT == 7
TRUSTING_PERIOD == 1400 \* two weeks, one day is 100 time units :-)
IS_PRIMARY_CORRECT == FALSE
VARIABLES
state, nextHeight, fetchedLightBlocks, lightBlockStatus, latestVerified,
nprobes,
now, blockchain, Faulty
INSTANCE Lightclient_A_1
============================================================================

+1162 -0 rust-spec/lightclient/verification/verification.md (file diff suppressed because it is too large)

