## Description
Internalize some libs. This reduces the amount of public API Tendermint is supporting. The moved libraries are mainly ones that are used within Tendermint Core.
* add time warping lunatic attack test
* create too high and connection refused errors and add them to the light client provider
* add height check to provider
* introduce block lag
* add detection logic for processing forward lunatic attack
* add node-side verification logic
* clean up tests and formatting
* update ADRs
* update testing
* fix fetching the latest block
* format
* update changelog
* implement suggestions
* modify ADRs
* format
* clean up node evidence verification
Introduces heuristics that track the number of failed responses and unavailable blocks a provider has returned, for more robust provider handling by the light client. Uses concurrent calls to all witnesses when a new primary is needed.
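The concurrent-witness part can be sketched roughly as below; the `Provider` interface, the function name and the types are simplified assumptions rather than the actual light-client API:

```go
package light

import (
	"context"
	"errors"
)

// SignedHeader is a placeholder for types.SignedHeader.
type SignedHeader struct{ Height int64 }

// Provider is a simplified stand-in for the light client's provider interface.
type Provider interface {
	SignedHeader(ctx context.Context, height int64) (*SignedHeader, error)
}

// findNewPrimary asks every witness for the header at the given height
// concurrently and promotes the first witness that answers successfully.
func findNewPrimary(ctx context.Context, witnesses []Provider, height int64) (Provider, error) {
	type response struct {
		witness Provider
		err     error
	}
	responses := make(chan response, len(witnesses))
	for _, w := range witnesses {
		go func(w Provider) {
			_, err := w.SignedHeader(ctx, height)
			responses <- response{witness: w, err: err}
		}(w)
	}
	// Take the first witness that responds without an error; give up once
	// every witness has answered and none of them succeeded.
	for range witnesses {
		if r := <-responses; r.err == nil {
			return r.witness, nil
		}
	}
	return nil, errors.New("no witness is available")
}
```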
## Description
I'm just doing a self-audit of the light client. There are a few things I've changed:
- Validate trust level in `VerifyNonAdjacent` function (see the sketch after this list)
- Make errNoWitnesses public (it's something people running software on top of a light client should be able to parse)
- Remove `ChainID` check of witnesses on startup. We do this already when we compare the first header with witnesses.
- Remove `ChainID()` from provider interface
Closes: #4538
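For the first bullet, a minimal sketch of the kind of trust-level validation meant here (the `Fraction` type and the exact error message are assumptions):

```go
package light

import "fmt"

// Fraction is a placeholder for the fraction type used for trust levels.
type Fraction struct {
	Numerator, Denominator int64
}

// ValidateTrustLevel sketches the check: the trust level must be a
// well-formed fraction within [1/3, 1].
func ValidateTrustLevel(lvl Fraction) error {
	if lvl.Denominator == 0 ||
		lvl.Numerator*3 < lvl.Denominator || // lvl < 1/3
		lvl.Numerator > lvl.Denominator { // lvl > 1
		return fmt.Errorf("trust level must be within [1/3, 1], got %d/%d", lvl.Numerator, lvl.Denominator)
	}
	return nil
}
```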
## Description
This PR wraps the stdlib `sync.(RW)Mutex` & `godeadlock.(RW)Mutex`. This enables using go-deadlock via a build flag instead of using sed to replace `sync` with `godeadlock` in all files.
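A minimal sketch of how such a build-tag switch can look (the file layout, package path and tag name are assumptions, not necessarily what this PR does):

```go
// libs/sync/deadlock.go (illustrative path): only compiled when the
// "deadlock" build tag is set, e.g. `go build -tags deadlock ./...`.

// +build deadlock

package sync

import deadlock "github.com/sasha-s/go-deadlock"

// A Mutex is a mutual exclusion lock with deadlock detection enabled.
type Mutex struct {
	deadlock.Mutex
}

// An RWMutex is a reader/writer mutual exclusion lock with deadlock detection enabled.
type RWMutex struct {
	deadlock.RWMutex
}
```

A mirror file guarded by `// +build !deadlock` would embed the standard library's `sync.Mutex` and `sync.RWMutex` in the same way, so callers import a single package and the build flag decides which implementation they get.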
Closes: #3242
Closes #4934
* light: do not compare trusted header w/ witnesses
we don't have trusted state to bisect from
* check header before checking height
otherwise you can get a nil panic
Since the light client work introduced in v0.33, it appears full nodes
are no longer fully verifying commit signatures during block execution -
they stop after +2/3. See in VerifyCommit:
0c7fd316eb/types/validator_set.go (L700-L703)
This means proposers can propose blocks that contain valid +2/3
signatures and then the rest of the signatures can be whatever they
want. They can claim that all the other validators signed just by
including a CommitSig with arbitrary signature data. While this doesn't
seem to impact safety of Tendermint per se, it means that Commits may
contain a lot of invalid data. This is already true of blocks, since
they can include invalid txs filled with garbage, but in that case the
application knows that they are invalid and can punish the proposer. But
since applications don't verify commit signatures directly (they trust
Tendermint to do that), they won't be able to detect it.
This can impact incentivization logic in the application that depends on
the LastCommitInfo sent in BeginBlock, which includes which validators
signed. For instance, Gaia incentivizes proposers with a bonus for
including more than +2/3 of the signatures. But a proposer can now claim
that bonus just by including arbitrary data for the final -1/3 of
validators without actually waiting for their signatures. There may be
other tricks that can be played because of this.
In general, the full node should be a fully verifying machine. While
it's true that the light client can avoid verifying all signatures by
stopping after +2/3, the full node can not. Thus the light client and
full node should use distinct VerifyCommit functions if one is going to
stop after +2/3 or otherwise perform less validation (for instance light
clients can also skip verifying votes for nil while full nodes can not).
See a commit with a bad signature that verifies here: 56367fd. From what
I can tell, Tendermint will go on to think this commit is valid and
forward this data to the app, so the app will think the second validator
actually signed when it clearly did not.
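To make the distinction concrete, here is a heavily simplified sketch (stand-in types, not the real `types.ValidatorSet` API) of the two modes: stopping at +2/3 versus verifying every signature included in the commit.

```go
package verification

import (
	"errors"
	"fmt"
)

// Validator and CommitSig are heavily simplified stand-ins for the real types;
// sigs[i] is assumed to correspond to vals[i].
type Validator struct {
	VotingPower int64
	VerifySig   func(signBytes, sig []byte) bool // stand-in for pubkey verification
}

type CommitSig struct {
	Absent    bool
	SignBytes []byte
	Signature []byte
}

// verifyCommitSigs tallies voting power while verifying signatures. With
// stopAtTwoThirds=true (the light-client shortcut) it returns as soon as +2/3
// is reached, leaving the remaining signatures unchecked (the gap described
// above). A full node should effectively run with stopAtTwoThirds=false and
// verify every included signature.
func verifyCommitSigs(vals []Validator, sigs []CommitSig, totalPower int64, stopAtTwoThirds bool) error {
	var tallied int64
	for i, cs := range sigs {
		if cs.Absent {
			continue // nothing to verify for absent votes
		}
		if !vals[i].VerifySig(cs.SignBytes, cs.Signature) {
			return fmt.Errorf("wrong signature from validator %d", i)
		}
		tallied += vals[i].VotingPower
		if stopAtTwoThirds && tallied*3 > totalPower*2 {
			return nil
		}
	}
	if tallied*3 <= totalPower*2 {
		return errors.New("commit does not have +2/3 voting power")
	}
	return nil
}
```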
fix bug so that PotentialAmnesiaEvidence is being gossiped
handle inbound amnesia evidence correctly
add method to check if potential amnesia evidence is on trial
fix a bug with the height when we upgrade to amnesia evidence
change evidence to use just pointers.
More logging in the evidence module
Co-authored-by: Marko <marbar3778@yahoo.com>
* lite2: check header w/ witnesses only when doing bisection
Closes #4872
We don't need to check witnesses if we're doing backwards hash chain
verification. I also think we don't need to do it when sequential
verification is being used.
* lite2: require 1 witness only when verificationMode=skipping
https://github.com/tendermint/tendermint/pull/4929#pullrequestreview-423256477
we don't need witnesses when performing sequential verification (except
when primary fails)
Closes #4603
Commands used (VIM):
```
:args `rg -l errors.Wrap`
:argdo normal @q | update
```
where q is a macro rewriting `errors.Wrap` to `fmt.Errorf`.
Closes #4783
It looks like we're validating Commit twice. Also, height and blockID params were coming from the commit, so no need to pass them separately.
Closes: #4530
This PR contains logic both for submitting evidence from the light client (lite2 package) and for receiving it on the Tendermint side (/broadcast_evidence RPC and/or EvidenceReactor#Receive). Upon receiving the ConflictingHeadersEvidence (introduced by this PR), Tendermint validates it, then breaks it down into smaller pieces (DuplicateVoteEvidence, LunaticValidatorEvidence, PhantomValidatorEvidence, PotentialAmnesiaEvidence). Afterwards, each piece of evidence is verified against the state of the full node and added to the pool, from which it's reaped upon block creation.
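A rough sketch of that receive-side flow; every type and function here is a simplified placeholder rather than the real evidence-package API:

```go
package evidence

// Evidence is a placeholder for the real evidence interface.
type Evidence interface {
	ValidateBasic() error
}

// Pool is a placeholder for the evidence pool.
type Pool struct {
	pending []Evidence
}

func (p *Pool) Add(ev Evidence) {
	p.pending = append(p.pending, ev)
}

// handleConflictingHeaders sketches the flow described above: validate the
// composite evidence, break it down into individual pieces (duplicate vote,
// lunatic validator, phantom validator, potential amnesia), verify each piece
// against the full node's own state, and add what survives to the pool.
// split and verifyAgainstState are placeholders for the real logic.
func handleConflictingHeaders(
	composite Evidence,
	split func(Evidence) []Evidence,
	verifyAgainstState func(Evidence) error,
	pool *Pool,
) error {
	if err := composite.ValidateBasic(); err != nil {
		return err
	}
	for _, piece := range split(composite) {
		if err := verifyAgainstState(piece); err != nil {
			continue // pieces that don't check out against our state are dropped
		}
		pool.Add(piece)
	}
	return nil
}
```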
* rpc/client: do not pass height param if height ptr is nil
* rpc/core: validate incoming evidence!
* only accept ConflictingHeadersEvidence if one
of the headers is committed from this full node's perspective
This simplifies the code. Plus, if there are multiple forks, we'll likely receive multiple ConflictingHeadersEvidence anyway.
* swap CommitSig with Vote in LunaticValidatorEvidence
Vote is needed to validate signature
* no need to embed client
http is a provider and should not be used as a client
Closes: #4537
Uses SignedHeaderBefore to find the header before the unverified header, and then uses bisection to verify it. This is only done when the header is between the first and last trusted header heights; if it is before the first trusted header height, regular backwards verification is used instead.
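A condensed sketch of that routing; the `Client` fields and helper names are assumptions, not the actual lite2 code:

```go
package light

// Header is a placeholder for types.SignedHeader.
type Header struct {
	Height int64
}

// Client is a minimal placeholder holding only what the sketch needs; the
// function fields stand in for the real verification helpers.
type Client struct {
	firstTrusted, lastTrusted int64

	signedHeaderBefore func(height int64) (*Header, error) // closest trusted header below height
	bisect             func(from *Header, to int64) error  // forward (skipping) verification
	backwards          func(to int64) error                // reverse hash-chain verification
	latestTrusted      func() *Header
}

// verifyAtHeight sketches the routing described above.
func (c *Client) verifyAtHeight(height int64) error {
	switch {
	case height < c.firstTrusted:
		// Older than any trusted header: regular backwards verification.
		return c.backwards(height)
	case height <= c.lastTrusted:
		// Between the first and last trusted header heights: start from the
		// closest trusted header below the target and bisect up to it.
		start, err := c.signedHeaderBefore(height)
		if err != nil {
			return err
		}
		return c.bisect(start, height)
	default:
		// Newer than the last trusted header: normal forward verification.
		return c.bisect(c.latestTrusted(), height)
	}
}
```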
Closes: #4546
The algorithm uses an array to store the headers and validators and populates it at every bisection (i.e. every unsuccessful verification). When a successful verification finally occurs, it updates the new trusted header, trims that header from the cache (the array) and sets the depth pointer back to 0. Instead of retrieving new headers it uses the cached headers, incrementing the depth until it reaches the end of the cache, at which point it starts retrieving new headers from the provider again.
Mathematically, this method doesn't properly bisect after the first round, but it will always choose a pivot header that is at most 1/8th of the total jump above the midpoint. I.e. if we are trying to jump 128 headers, the pivot can end up as high as 64 + 16 (128/8) = 80 instead of the true midpoint of 64. A better heuristic would therefore be to take the new pivot header height as the middle of these two numbers, which means multiplying by 9/16 instead of 1/2 (sorry, this might be a bit complicated in writing, but I can try to explain better if someone is interested). I would therefore also, upon consensus, propose that we change the pivot height to 9/16ths of the previous height.
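As a worked example: jumping from a trusted height of 0 to height 128, plain halving picks 64 as the pivot, while the cached headers can push the effective pivot as high as 64 + 128/8 = 80; the midpoint of those two, 72 = 128 * 9/16, is the proposed pivot. A tiny sketch of the adjusted pivot calculation (a hypothetical helper, not the current code):

```go
package light

// pivotHeight is a hypothetical helper for the proposed heuristic: pick the
// pivot 9/16 of the way between the trusted and the target height instead of
// halfway. pivotHeight(0, 128) == 72.
func pivotHeight(trustedHeight, targetHeight int64) int64 {
	return trustedHeight + (targetHeight-trustedHeight)*9/16
}
```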
Closes: #4420
Created a new error, `ErrInvalidHeader`, which can be produced during the verification process (`verifier.go`) and results in the replacement of the primary provider with a witness by executing `replacePrimaryProvider()`.
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
An error by itself is not enough, since it only signals whether there were any errors. Either (types.SignedHeader) or (success bool) is needed to indicate the status of the operation. Returning a header is optimal, since most clients will want to get a newly verified header anyway.
We first introduced auto-update as a separate struct AutoClient, which wrapped Client and called Update periodically:
```go
// AutoClient can auto update itself by fetching headers every N seconds.
type AutoClient struct {
	base           *Client
	updatePeriod   time.Duration
	quit           chan struct{}
	trustedHeaders chan *types.SignedHeader
	errs           chan error
}

// NewAutoClient creates a new client and starts a polling goroutine.
func NewAutoClient(base *Client, updatePeriod time.Duration) *AutoClient {
	c := &AutoClient{
		base:           base,
		updatePeriod:   updatePeriod,
		quit:           make(chan struct{}),
		trustedHeaders: make(chan *types.SignedHeader),
		errs:           make(chan error),
	}
	go c.autoUpdate()
	return c
}

// TrustedHeaders returns a channel onto which new trusted headers are posted.
func (c *AutoClient) TrustedHeaders() <-chan *types.SignedHeader {
	return c.trustedHeaders
}

// Errs returns a channel onto which errors are posted.
func (c *AutoClient) Errs() <-chan error {
	return c.errs
}

// Stop stops the client.
func (c *AutoClient) Stop() {
	close(c.quit)
}

func (c *AutoClient) autoUpdate() {
	ticker := time.NewTicker(c.updatePeriod)
	defer ticker.Stop()

	for {
		select {
		case <-ticker.C:
			lastTrustedHeight, err := c.base.LastTrustedHeight()
			if err != nil {
				c.errs <- err
				continue
			}

			if lastTrustedHeight == -1 {
				// no headers yet => wait
				continue
			}

			newTrustedHeader, err := c.base.Update(time.Now())
			if err != nil {
				c.errs <- err
				continue
			}

			if newTrustedHeader != nil {
				c.trustedHeaders <- newTrustedHeader
			}
		case <-c.quit:
			return
		}
	}
}
```
Later we merged it into the Client itself with the assumption that most clients will want it.
But now I am not sure. Neither IBC nor cosmos/relayer are using it. It increases complexity (Start/Stop methods).
Given that, I think it makes sense to remove it until we see a need for it (until we better understand usage behavior). We can always introduce it later 😅, maybe in the form of AutoClient.
* lite2: fix tendermint lite sub command
- better logging
- chainID as an argument
- more examples
* one more log msg
* lite2: fire update right away after start
* turn off auto update in verification tests
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
closes #4469
Improved the speed of cleanup by using SignedHeaderAfter instead of TrustedHeader to jump from header to header.
Prune() is now called when a new header and validator set are saved, and is handled by the database itself.
## Commits:
* prune headers and vals
* modified cleanup and tests
* fixes after my own review
* implement Prune func
* make db ops concurrently safe
* use Iterator in SignedHeaderAfter
we should iterate from height+1, not from the end!
* simplify cleanup
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
closes: #4455
Backwards verification checks that the trustedHeader hasn't expired both before and after the loop (in case many headers are being verified, which is a longer operation), but not during the loop itself.
TrustedHeader() no longer checks whether the header saved in the store has expired.
Tests have been updated to reflect the changes.
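A rough sketch of that check placement (the helper names and signatures are assumptions, not the actual lite2 code):

```go
package light

import (
	"errors"
	"time"
)

// Header is a placeholder for types.SignedHeader.
type Header struct {
	Height int64
	Time   time.Time
}

// backwards sketches the expiry-check placement described above: the trusted
// header is checked against the trusting period once before and once after
// the loop (verifying many headers can take a while), but not on every
// iteration. fetch stands in for retrieving a header from the primary, and
// verifyBackwardsStep for the hash-chain check between two adjacent headers.
func backwards(
	trusted *Header,
	targetHeight int64,
	trustingPeriod time.Duration,
	fetch func(height int64) (*Header, error),
	verifyBackwardsStep func(untrusted, trusted *Header) error,
) error {
	expired := func() bool { return time.Since(trusted.Time) > trustingPeriod }

	if expired() {
		return errors.New("trusted header has expired")
	}

	for cur := trusted; cur.Height > targetHeight; {
		prev, err := fetch(cur.Height - 1)
		if err != nil {
			return err
		}
		if err := verifyBackwardsStep(prev, cur); err != nil {
			return err
		}
		cur = prev
	}

	// Re-check once more: the loop above may have taken a long time.
	if expired() {
		return errors.New("trusted header has expired")
	}
	return nil
}
```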
## Commits:
* verify headers backwards out of trust period
* removed expiration check in trusted header func
* modified tests to reflect changes
* wrote new tests for backwards verification
* modified TrustedHeader and TrustedValSet functions
* condensed test functions
* condensed test functions further
* fix build error
* update doc
* add comments
* remove unnecessary declaration
* extract latestHeight check into a separate func
Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Before, we were storing trustedHeader (height=1) and trustedNextVals (height=2).
After this change, we will be storing trustedHeader (height=1) and trustedVals (height=1). This a) simplifies the code, b) fixes the inconsistent pairing issue #4399, and c) gives a relayer access to the current validator set (#4470).
The only downside is more jumps during bisection. If the validator set changes between trustedHeader and the next header (by 2/3 or more), the light client will be forced to download the next header and check that 2/3+ signed the transition. But we don't expect the validator set to change too much or too often, so it's an acceptable compromise.
Closes #4470 and #4399