- Update documentation to deprecate the old methods.
- Add Events methods to HTTP, WS, and Local clients.
- Add Events method to the light client wrapper.
- Rename legacy events client to SubscriptionClient.
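A rough sketch of the shape such an Events accessor might take on the shared client interface (the names and fields below are illustrative only, not the actual tendermint API):

```go
package eventsclient

import "context"

// EventItem is a hypothetical single event record returned by a query.
type EventItem struct {
	Cursor string // position in the event log, for resuming
	Type   string // event type, e.g. "NewBlock"
	Data   []byte // opaque event payload
}

// EventsCaller is a hypothetical interface sketching the Events method that
// the HTTP, WS, and local clients now share; the legacy subscription-based
// client lives on as SubscriptionClient.
type EventsCaller interface {
	// Events returns up to limit events matching query, starting after cursor.
	Events(ctx context.Context, query, cursor string, limit int) ([]EventItem, error)
}
```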
* Rebased and git-squashed the commits in PR #6546
migrate abci to finalizeBlock
work on abci, proxy and mempool
abciresponses, block events, indexer, some tests
fix some tests
fix errors
fix errors in abci
fix tests and errors
* Fixes after rebasing PR #6546
* Restored height to RequestFinalizeBlock & other
* Fixed more UTs
* Fixed kvstore
* More UT fixes
* last TC fixed
* make format
* Update internal/consensus/mempool_test.go
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* Addressed @williambanfield's comments
* Fixed UTs
* Addressed last comments from @williambanfield
* make format
Co-authored-by: marbar3778 <marbar3778@yahoo.com>
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* rpc/client: remove the placeholder RunState type.
I added the RunState type in #6971 to disconnect clients from the service
plumbing, which they do not need. Now that we have more complete context
plumbing, the lifecycle of a client no longer depends on this type: It serves
as a carrier for a logger, and a Boolean flag for "running" status, neither of
which is used outside of tests.
Logging in particular is defaulted to a no-op logger in all production use.
Arguably we could just remove the logging calls, since they are never invoked
except in tests. To defer the question of whether we should do that or make the
logging go somewhere more productive, I've preserved the existing use here.
Remove use of the IsRunning method that was provided by the RunState, and use
the Start method and context to govern client lifecycle.
Remove the one test that exercised "unstarted" clients. I would like to remove
that method entirely, but that will require updating the constructors for all
the client types to plumb a context and possibly other options. I have deferred
that for now.
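A minimal sketch of the lifecycle pattern this moves toward, where canceling the context passed to Start is what stops the client (the types here are illustrative, not the actual client code):

```go
package lifecycle

import "context"

// Client is a stand-in type whose lifecycle is governed by the context given
// to Start rather than by an IsRunning flag.
type Client struct {
	done chan struct{}
}

// Start launches the client's worker goroutine; the goroutine exits when ctx
// is canceled, so callers stop the client by canceling the context.
func (c *Client) Start(ctx context.Context) error {
	c.done = make(chan struct{})
	go func() {
		defer close(c.done)
		<-ctx.Done() // a real worker would also select on its work channels
	}()
	return nil
}

// Wait blocks until the worker goroutine has exited.
func (c *Client) Wait() { <-c.done }
```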
Responses are constructed from requests using MakeResponse, MakeError, and
MakeErrorf. This ensures the response is always paired with the correct ID,
makes cases where there is no ID more explicit at the usage site, and
consolidates the handling of error introspection across transports.
The logic for unpacking errors and assigning JSON-RPC response types was
previously duplicated in three places. Consolidate it in the types package for
the RPC subsystem.
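As an illustration of that consolidation, here is a simplified sketch of response constructors keyed off the request; the real types in the RPC types package differ in detail:

```go
package jsonrpc

import "encoding/json"

// Request and Response are simplified JSON-RPC 2.0 shapes for illustration.
type Request struct {
	ID     json.RawMessage `json:"id"`
	Method string          `json:"method"`
}

type Error struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

type Response struct {
	ID     json.RawMessage `json:"id"`
	Result json.RawMessage `json:"result,omitempty"`
	Error  *Error          `json:"error,omitempty"`
}

// MakeResponse pairs a successful result with the ID of the request that
// produced it, so the transport can never attach the wrong ID.
func (req Request) MakeResponse(result json.RawMessage) Response {
	return Response{ID: req.ID, Result: result}
}

// MakeError pairs an error with the originating request's ID; a request with
// no ID yields a response with no ID, making that case explicit to callers.
func (req Request) MakeError(code int, msg string) Response {
	return Response{ID: req.ID, Error: &Error{Code: code, Message: msg}}
}
```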
* update test cases
* light: rpc /status returns status of light client; code refactoring
light: moved lightClientInfo into light.go, renamed String to ID
test/e2e: Return light client trusted height instead of SyncInfo trusted height
test/e2e/start.go: Not waiting for light client to catch up in tests. Removed querying of syncInfo in start if the node is a light node
* light: Removed call to primary /status. Added trustedPeriod to light info
* light/provider: added ID function to return IP of primary and witnesses
* light/provider/http/http_test: renamed String() to ID()
This commit changes the behaviour of the /unconfirmed_txs endpoint by replacing the limit parameter with page and perPage parameters for pagination.
The test cases for unconfirmed_txs have been updated to properly cover this change, and the API documentation has been updated as well.
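A small sketch of the slice-bounds arithmetic such a paginated handler needs; the parameter names follow the description above, and the actual handler and its defaults may differ:

```go
package rpcpage

import "fmt"

// paginate computes the half-open range [start, end) of items to return for
// a given page and perPage over total items.
func paginate(total, page, perPage int) (start, end int, err error) {
	if page < 1 || perPage < 1 {
		return 0, 0, fmt.Errorf("page and perPage must be positive, got page=%d perPage=%d", page, perPage)
	}
	start = (page - 1) * perPage
	if start >= total {
		return 0, 0, fmt.Errorf("page %d is out of range (total=%d, perPage=%d)", page, total, perPage)
	}
	end = start + perPage
	if end > total {
		end = total
	}
	return start, end, nil
}
```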
* Rename rpctypes.Context to CallInfo.
Add methods to attach and recover this value from a context.Context.
* Rework RPC method handlers to accept "real" contexts.
- Replace *rpctypes.Context arguments with context.Context.
- Update usage of RPC context fields to use CallInfo.
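The attach/recover helpers follow the usual context-value pattern; a sketch with an illustrative field set (the real CallInfo fields may differ):

```go
package rpcctx

import "context"

// CallInfo carries the per-request metadata that previously rode on
// *rpctypes.Context.
type CallInfo struct {
	RemoteAddr string
	HTTPHeader map[string][]string
}

type callInfoKey struct{}

// WithCallInfo returns a copy of ctx carrying ci.
func WithCallInfo(ctx context.Context, ci *CallInfo) context.Context {
	return context.WithValue(ctx, callInfoKey{}, ci)
}

// GetCallInfo recovers the CallInfo attached to ctx, or nil if none is set.
func GetCallInfo(ctx context.Context) *CallInfo {
	ci, _ := ctx.Value(callInfoKey{}).(*CallInfo)
	return ci
}
```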
This continues the push of plumbing contexts through tendermint. I
attempted to find all goroutines in the production code (non-test) and
made sure that these threads would exit when their contexts were
canceled, and I believe this PR does that.
Addresses one of the concerns with #7041.
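The recurring shutdown pattern is the standard select-on-ctx.Done() loop, sketched here in isolation:

```go
package worker

import (
	"context"
	"time"
)

// runLoop is the shape every long-lived goroutine should take: it does its
// periodic work but exits promptly once the caller cancels the context.
func runLoop(ctx context.Context, interval time.Duration, tick func()) {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			tick()
		case <-ctx.Done():
			return
		}
	}
}
```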
Provides a mechanism (via the RPC interface) to delete a single transaction, described by its hash, from the mempool. The method returns an error if the transaction cannot be found. Once the transaction is removed it remains in the cache and cannot be resubmitted until the cache is cleared or it expires from the cache.
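A toy sketch of the semantics described above, with a separate pending set and seen-cache so that removal does not allow immediate resubmission (this is illustrative only, not the tendermint mempool implementation):

```go
package mempooldemo

import (
	"crypto/sha256"
	"errors"
	"fmt"
)

type txKey [sha256.Size]byte

// pool keeps pending transactions and a cache of recently seen hashes; the
// cache outlives removal, which is what blocks immediate resubmission.
type pool struct {
	txs   map[txKey][]byte
	cache map[txKey]bool
}

// RemoveTxByKey deletes the transaction with the given key from the pending
// set, returning an error if it is not there. The hash stays in the cache.
func (p *pool) RemoveTxByKey(key txKey) error {
	if _, ok := p.txs[key]; !ok {
		return errors.New("transaction not found in mempool")
	}
	delete(p.txs, key)
	return nil
}

// CheckTx rejects a transaction whose hash is still in the cache, so a removed
// transaction cannot be resubmitted until the cache entry is cleared or expires.
func (p *pool) CheckTx(tx []byte) error {
	key := txKey(sha256.Sum256(tx))
	if p.cache[key] {
		return fmt.Errorf("tx %X already in cache", key[:])
	}
	p.cache[key] = true
	p.txs[key] = tx
	return nil
}
```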
The code in the Tendermint repository makes heavy use of import aliasing.
This is made necessary by our extensive reuse of common base package names, and
by repetition of similar names across different subdirectories.
Unfortunately we have not been very consistent about which packages we alias in
various circumstances, and the aliases we use vary. In the spirit of the advice
in the style guide and https://github.com/golang/go/wiki/CodeReviewComments#imports,
this change makes an effort to clean up and normalize import aliasing.
This change makes no API or behavioral changes. It is a pure cleanup intended
to help make the code more readable to developers (including myself) trying to
understand what is being imported where.
Only unexported names have been modified, and the changes were generated and
applied mechanically with gofmt -r and comby, respecting the lexical and
syntactic rules of Go. Even so, I did not fix every inconsistency. Where the
changes would be too disruptive, I left it alone.
The principles I followed in this cleanup are:
- Remove aliases that restate the package name.
- Remove aliases where the base package name is unambiguous.
- Move overly-terse abbreviations from the import to the usage site.
- Fix lexical issues (remove underscores, remove capitalization).
- Fix import groupings to more closely match the style guide.
- Group blank (side-effecting) imports and ensure they are commented.
- Add aliases to multiple imports with the same base package name.
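For illustration, an import block arranged the way these principles prefer (the paths shown are examples, not a complete inventory of affected files):

```go
package example

import (
	"context"
	"fmt"

	abci "github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/libs/log"
	tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
	"github.com/tendermint/tendermint/types"

	// Register pprof handlers on the default HTTP mux (side effect only).
	_ "net/http/pprof"
)

// demo exists only to use the imports above; abci and tmproto carry aliases
// because their base package names would otherwise collide with types.
func demo(ctx context.Context, logger log.Logger, h tmproto.Header, b *types.Block, r abci.RequestInfo) {
	fmt.Println(ctx, logger, h, b, r)
}
```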
The responses from node RPCs encode hash values as hexadecimal strings. This
behaviour is stipulated in our OpenAPI documentation. In some cases, however,
hashes received as JSON parameters were being decoded as byte buffers, as is
the convention for JSON.
This resulted in the confusing situation that a hash reported by one request
(e.g., broadcast_tx_commit) could not be passed as a parameter to another
(e.g., tx) via JSON, without translating the hex-encoded output hash into the
base64 encoding used by JSON for opaque bytes.
Fixes #6802.
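A self-contained sketch of the mismatch: a plain []byte parameter is decoded as base64 by encoding/json, while a dedicated hex type accepts the hash exactly as another RPC printed it (the helper type here is illustrative):

```go
package main

import (
	"encoding/hex"
	"encoding/json"
	"fmt"
	"log"
)

// hexBytes decodes from a hexadecimal JSON string, matching how node RPCs
// print hashes, instead of the base64 convention json uses for plain []byte.
type hexBytes []byte

func (h *hexBytes) UnmarshalJSON(data []byte) error {
	var s string
	if err := json.Unmarshal(data, &s); err != nil {
		return err
	}
	b, err := hex.DecodeString(s)
	if err != nil {
		return err
	}
	*h = b
	return nil
}

func main() {
	// The hash exactly as another RPC (e.g. broadcast_tx_commit) reported it.
	in := []byte(`"AE3F1D2C"`)

	var plain []byte
	err := json.Unmarshal(in, &plain)
	fmt.Printf("plain []byte: %X err=%v (interpreted as base64, wrong bytes)\n", plain, err)

	var h hexBytes
	if err := json.Unmarshal(in, &h); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("hexBytes:     %X (round-trips)\n", []byte(h))
}
```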
This change aims to keep versions of mockery consistent across developer laptops.
This change adds mockery to the `tools.go` file so that its version can be managed consistently in the `go.mod` file.
Additionally, this change temporarily disables adding mockery's version number to generated files. There is an outstanding issue against the mockery project related to the version string behavior when running from `go get`. I have created a pull request to fix this issue in the mockery project.
see: https://github.com/vektra/mockery/issues/397
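For reference, the usual tools.go pattern looks roughly like this (the exact file in the repository may differ):

```go
//go:build tools
// +build tools

// Package tools pins build-time tool dependencies such as mockery so their
// versions are recorded in go.mod and shared by every developer machine.
package tools

import (
	_ "github.com/vektra/mockery/v2"
)
```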
## Description
This PR removes simple prefix from all types in the crypto/merkle directory.
The two proto types `Proof` & `ProofOp` have been moved to the `proto/crypto/merkle` directory.
The proto message `Proof` was renamed to `ProofOps`, and the `SimpleProof` message was renamed to `Proof`.
Closes: #2755
Migrates the `rpc` package to use the new JSON encoder introduced in #4955. Branched off of that PR.
Tests pass, but I haven't done any manual testing beyond that. This should be handled as part of broader 0.34 testing.
Closes #4837
- `/block_results`
before:
failed to update light client to 7: failed to obtain the header #7: signed header not found
after:
We can't return the latest block results because we won't be able to
prove them. Return the results for the previous block instead.
- `/block_results?height=X`
no changes
Ethermint currently has to maintain a height -> block hash map in its store (see here), because it needs to expose the eth_getBlockByHash JSON-RPC query for Web3 compatibility. This query is currently not supported by the Tendermint RPC client.
Closes: #4695
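A sketch of the lookup the new endpoint makes possible, expressed against a hypothetical store interface (not the actual tendermint types):

```go
package blockbyhash

import "context"

// Block is a placeholder for the block data returned to the caller.
type Block struct {
	Height int64
	Hash   []byte
}

// BlockStore is a hypothetical view of the node's block store that can look a
// block up directly by hash, so the application no longer needs to keep its
// own height -> hash map.
type BlockStore interface {
	LoadBlockByHash(hash []byte) (*Block, error)
}

// getBlockByHash is the shape of an eth_getBlockByHash-style handler built on
// top of the new RPC: resolve the hash straight to the block.
func getBlockByHash(ctx context.Context, store BlockStore, hash []byte) (*Block, error) {
	_ = ctx // a real handler would pass ctx through the RPC client call
	return store.LoadBlockByHash(hash)
}
```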
Verify /block_results and /validators responses from an HTTP client using the light client.
Added count and total to /validators response.
Refs #3113
We first introduced auto-update as a separate struct, AutoClient, which wrapped Client and called Update periodically.
// AutoClient can auto update itself by fetching headers every N seconds.
type AutoClient struct {
    base           *Client
    updatePeriod   time.Duration
    quit           chan struct{}
    trustedHeaders chan *types.SignedHeader
    errs           chan error
}

// NewAutoClient creates a new client and starts a polling goroutine.
func NewAutoClient(base *Client, updatePeriod time.Duration) *AutoClient {
    c := &AutoClient{
        base:           base,
        updatePeriod:   updatePeriod,
        quit:           make(chan struct{}),
        trustedHeaders: make(chan *types.SignedHeader),
        errs:           make(chan error),
    }
    go c.autoUpdate()
    return c
}

// TrustedHeaders returns a channel onto which new trusted headers are posted.
func (c *AutoClient) TrustedHeaders() <-chan *types.SignedHeader {
    return c.trustedHeaders
}

// Errs returns a channel onto which errors are posted.
func (c *AutoClient) Errs() <-chan error {
    return c.errs
}

// Stop stops the client.
func (c *AutoClient) Stop() {
    close(c.quit)
}

func (c *AutoClient) autoUpdate() {
    ticker := time.NewTicker(c.updatePeriod)
    defer ticker.Stop()

    for {
        select {
        case <-ticker.C:
            lastTrustedHeight, err := c.base.LastTrustedHeight()
            if err != nil {
                c.errs <- err
                continue
            }

            if lastTrustedHeight == -1 {
                // no headers yet => wait
                continue
            }

            newTrustedHeader, err := c.base.Update(time.Now())
            if err != nil {
                c.errs <- err
                continue
            }

            if newTrustedHeader != nil {
                c.trustedHeaders <- newTrustedHeader
            }
        case <-c.quit:
            return
        }
    }
}
Later we merged it into the Client itself with the assumption that most clients will want it.
But now I am not sure. Neither IBC nor cosmos/relayer are using it. It increases complexity (Start/Stop methods).
So I think it makes sense to remove it until we see a need for it (until we better understand usage patterns). We can always introduce it later 😅. Maybe in the form of AutoClient.
closes: #4455
Backwards verification now checks that the trusted header hasn't expired both before and after the loop (verifying many headers can be a long operation), but not during the loop itself.
TrustedHeader() no longer checks whether the header saved in the store has expired.
Tests have been updated to reflect these changes.
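The control flow reads roughly like the sketch below (types and names are illustrative, not the light package's actual code): the expiry check brackets the loop instead of running on every iteration.

```go
package backwards

import (
	"errors"
	"time"
)

// Header is a minimal stand-in for a signed header.
type Header struct {
	Height int64
	Time   time.Time
}

var errTrustedExpired = errors.New("trusted header has expired")

// verifyBackwards checks expiry of the trusted header once before and once
// after the loop; verifying a long chain takes time, so the header may expire
// while the loop runs, but the check is not repeated per iteration.
func verifyBackwards(trusted *Header, chain []*Header, trustingPeriod time.Duration) error {
	expired := func() bool {
		return trusted.Time.Add(trustingPeriod).Before(time.Now())
	}

	if expired() {
		return errTrustedExpired
	}
	for _, h := range chain {
		// verify h against the previously verified header (hash links, heights, ...)
		_ = h
	}
	if expired() {
		return errTrustedExpired
	}
	return nil
}
```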
## Commits:
* verify headers backwards out of trust period
* removed expiration check in trusted header func
* modified tests to reflect changes
* wrote new tests for backwards verification
* modified TrustedHeader and TrustedValSet functions
* condensed test functions
* condensed test functions further
* fix build error
* update doc
* add comments
* remove unnecessary declaration
* extract latestHeight check into a separate func
Co-authored-by: Callum Waters <cmwaters19@gmail.com>