There are no further uses of this package anywhere in Tendermint.
All the uses in the Cosmos SDK are for types that now work correctly with the
standard encoding/json package.
The main change here is to use encoding/json to encode and decode RPC
parameters, rather than the custom tmjson package. This includes:
- Update the HTTP POST handler parameter handling.
- Add field tags to 64-bit integer types to get string encoding, matching amino/tmjson (see the sketch after this list).
- Add marshalers to struct types that mention interfaces.
- Inject wrappers to decode interface arguments in RPC handlers.
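For example, here is a minimal sketch (not the exact Tendermint code) of the string-encoding tag on a 64-bit field; the type and field names are illustrative:
```go
package example

// SyncInfo is an illustrative type. The ",string" option makes encoding/json
// emit and accept the int64 as a JSON string ("42"), matching amino/tmjson.
type SyncInfo struct {
	LatestBlockHeight int64 `json:"latest_block_height,string"`
	CatchingUp        bool  `json:"catching_up"`
}
```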
* light: rpc /status now returns the status of the light client; code refactoring
* light: moved lightClientInfo into light.go, renamed String to ID
* test/e2e: return the light client trusted height instead of the SyncInfo trusted height
* test/e2e/start.go: no longer waits for the light client to catch up in tests; removed the SyncInfo query in start when the node is a light node
* light: removed the call to the primary's /status; added trustedPeriod to the light info
* light/provider: added an ID function to return the IP of the primary and witnesses
* light/provider/http/http_test: renamed String() to ID()
## What does this pull request do?
This pull request adds two metrics intended for use in calculating an experimental value for `MessageDelay`.
The metrics are as follows:
```
# HELP tendermint_consensus_complete_prevote_message_delay Difference in seconds between the proposal timestamp and the timestamp of the prevote that achieved 100% of the voting power in the prevote step.
# TYPE tendermint_consensus_complete_prevote_message_delay gauge
tendermint_consensus_complete_prevote_message_delay{chain_id="test-chain-aZbwF1"} 0.013025505
# HELP tendermint_consensus_quorum_prevote_message_delay Difference in seconds between the proposal timestamp and the timestamp of the prevote that achieved a quorum in the prevote step.
# TYPE tendermint_consensus_quorum_prevote_message_delay gauge
tendermint_consensus_quorum_prevote_message_delay{chain_id="test-chain-aZbwF1"} 0.013025505
```
## Why this change?
For more information on what these metrics are calculating, see #7202. The aim is to backport these metrics to v0.34, run nodes on a few popular chains with them to determine experimental values for `MessageDelay` on those chains, and use the results to select our default `SynchronyParams.MessageDelay` value.
## Why Gauges for the metrics?
Gauges allow us to overwrite the metric on each successive observation. We can then capture these metrics over time to track the highest and lowest observed value.
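As a hedged sketch of how such a gauge can be defined and overwritten with go-kit's Prometheus adapter (this mirrors the usual Tendermint metrics pattern rather than quoting the PR's exact code):
```go
package consensus

import (
	"time"

	"github.com/go-kit/kit/metrics"
	prometheus "github.com/go-kit/kit/metrics/prometheus"
	stdprometheus "github.com/prometheus/client_golang/prometheus"
)

// Metrics holds the new gauge; field and label wiring are illustrative.
type Metrics struct {
	QuorumPrevoteMessageDelay metrics.Gauge
}

func PrometheusMetrics(chainID string) *Metrics {
	return &Metrics{
		QuorumPrevoteMessageDelay: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
			Namespace: "tendermint",
			Subsystem: "consensus",
			Name:      "quorum_prevote_message_delay",
			Help:      "Seconds between the proposal timestamp and the prevote that achieved a quorum.",
		}, []string{"chain_id"}).With("chain_id", chainID),
	}
}

// RecordQuorumDelay overwrites the gauge on each observation, which is why
// a gauge (rather than a counter or histogram) is used here.
func (m *Metrics) RecordQuorumDelay(proposalTime, voteTime time.Time) {
	m.QuorumPrevoteMessageDelay.Set(voteTime.Sub(proposalTime).Seconds())
}
```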
Add package jsontypes that implements a subset of the custom libs/json package. Specifically, it handles encoding and decoding of interface types wrapped in "tagged" JSON objects. It omits the deep reflection on arbitrary types, preserving only the type-tag wrapper encoding (sketched after the list below).
- Register interface types (Evidence, PubKey, PrivKey) for tagged encoding.
- Update the existing implementations to satisfy the type.
- Register those types with the jsontypes registry.
- Add string tags to 64-bit integer fields where needed.
- Add marshalers to structs that export interface-typed fields.
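A minimal self-contained sketch of the tagged-object idea, with illustrative names rather than the actual jsontypes API:
```go
package tagged

import (
	"encoding/json"
	"fmt"
)

// wrapper is the wire form: {"type": "<tag>", "value": <payload>}.
type wrapper struct {
	Type  string          `json:"type"`
	Value json.RawMessage `json:"value"`
}

// registry maps a type tag to a constructor for the concrete type.
var registry = map[string]func() interface{}{}

func register(tag string, newFn func() interface{}) { registry[tag] = newFn }

// marshal wraps v in a tagged object so the decoder can recover its type.
func marshal(tag string, v interface{}) ([]byte, error) {
	payload, err := json.Marshal(v)
	if err != nil {
		return nil, err
	}
	return json.Marshal(wrapper{Type: tag, Value: payload})
}

// unmarshal reads the tag, constructs the registered concrete type, and
// decodes the payload into it.
func unmarshal(data []byte) (interface{}, error) {
	var w wrapper
	if err := json.Unmarshal(data, &w); err != nil {
		return nil, err
	}
	newFn, ok := registry[w.Type]
	if !ok {
		return nil, fmt.Errorf("unknown type tag %q", w.Type)
	}
	v := newFn()
	if err := json.Unmarshal(w.Value, v); err != nil {
		return nil, err
	}
	return v, nil
}
```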
Where possible, replace uses of the custom JSON library with the standard library. The custom library treats interface and unnamed literal types differently, so this change avoids those cases even where it would probably be safe to switch them.
These tests use a deterministic and unseeded random source to generate non-colliding filenames. When run locally, this means the tests are not hermetic from one run to the next.
Use proper temp directories, and clean them up when the tests are done.
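The standard pattern looks roughly like this (a sketch; the test name and file contents are illustrative):
```go
package example

import (
	"os"
	"path/filepath"
	"testing"
)

// TestWriteFile shows the pattern: t.TempDir returns a unique directory
// that the testing framework removes automatically when the test ends.
func TestWriteFile(t *testing.T) {
	dir := t.TempDir()
	path := filepath.Join(dir, "state.json")
	if err := os.WriteFile(path, []byte(`{}`), 0o600); err != nil {
		t.Fatal(err)
	}
}
```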
Noticed in profiles that invoking *VoteSignBytes always created a
bytes.Buffer, then discarded it inside protoio.MarshalDelimited.
Digging into the call paths showed that we unconditionally create the
bytes.Buffer and invoke varintWriter, even though in the common case the
proto messages implement MarshalTo([]byte). By inlining that case we skip
a bunch of allocations and CPU cycles, which then reflects properly on all
calling functions. Here are the benchmark results:
```shell
$ benchstat before.txt after.txt
name old time/op new time/op delta
types.VoteSignBytes-8 705ns ± 3% 573ns ± 6% -18.74% (p=0.000 n=18+20)
types.CommitVoteSignBytes-8 8.15µs ± 9% 6.81µs ± 4% -16.51% (p=0.000 n=20+19)
protoio.MarshalDelimitedWithMarshalTo-8 788ns ± 8% 772ns ± 3% -2.01% (p=0.050 n=20+20)
protoio.MarshalDelimitedNoMarshalTo-8 989ns ± 4% 845ns ± 2% -14.51% (p=0.000 n=20+18)
name old alloc/op new alloc/op delta
types.VoteSignBytes-8 792B ± 0% 600B ± 0% -24.24% (p=0.000 n=20+20)
types.CommitVoteSignBytes-8 9.52kB ± 0% 7.60kB ± 0% -20.17% (p=0.000 n=20+20)
protoio.MarshalDelimitedNoMarshalTo-8 808B ± 0% 440B ± 0% -45.54% (p=0.000 n=20+20)
name old allocs/op new allocs/op delta
types.VoteSignBytes-8 13.0 ± 0% 10.0 ± 0% -23.08% (p=0.000 n=20+20)
types.CommitVoteSignBytes-8 140 ± 0% 110 ± 0% -21.43% (p=0.000 n=20+20)
protoio.MarshalDelimitedNoMarshalTo-8 10.0 ± 0% 7.0 ± 0% -30.00% (p=0.000 n=20+20)
```
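A hedged sketch of the fast path described above: when a message supports MarshalTo and Size, we can reserve one buffer, write the varint length prefix, and marshal directly into it (names here are illustrative, not the exact protoio code):
```go
package protoio

import (
	"encoding/binary"

	"github.com/gogo/protobuf/proto"
)

// sizedMarshaler is the common case: gogoproto-generated messages that can
// report their size and marshal into a caller-provided buffer.
type sizedMarshaler interface {
	proto.Message
	Size() int
	MarshalTo(dst []byte) (int, error)
}

func marshalDelimited(msg proto.Message) ([]byte, error) {
	if m, ok := msg.(sizedMarshaler); ok {
		// Fast path: one allocation for prefix + body, no bytes.Buffer.
		size := m.Size()
		buf := make([]byte, binary.MaxVarintLen64+size)
		n := binary.PutUvarint(buf, uint64(size))
		nw, err := m.MarshalTo(buf[n:])
		if err != nil {
			return nil, err
		}
		return buf[:n+nw], nil
	}
	// Slow path: marshal first, then prepend the length prefix.
	data, err := proto.Marshal(msg)
	if err != nil {
		return nil, err
	}
	buf := make([]byte, binary.MaxVarintLen64+len(data))
	n := binary.PutUvarint(buf, uint64(len(data)))
	return append(buf[:n], data...), nil
}
```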
Thanks to Tharsis, who tasked me with helping them increase TPS and who are keen on improving Tendermint and its efficiency.
Rework the implementation of event query parsing and execution to
improve performance and reduce memory usage.
Previous memory and CPU profiles of the pubsub service showed query
processing as a significant hotspot. While we don't have evidence that
this is visibly hurting users, fixing it is fairly easy and self-contained.
Updates #6439.
Typical benchmark results comparing the original implementation (PEG) with the reworked implementation (Custom):
```
TEST TIME/OP BYTES/OP ALLOCS/OP SPEEDUP MEM SAVING
BenchmarkParsePEG-12 51716 ns 526832 27
BenchmarkParseCustom-12 2167 ns 4616 17 23.8x 99.1%
BenchmarkMatchPEG-12 3086 ns 1097 22
BenchmarkMatchCustom-12 294.2 ns 64 3 10.5x 94.1%
```
Components:
* Add a basic parsing benchmark.
* Move the original query implementation to a subdirectory.
* Add lexical scanner for Query expressions.
* Add a parser for Query expressions.
* Implement query compiler.
* Add test cases based on OpenAPI examples.
* Add MustCompile to replace the original MustParse, and update usage.
This is part of the work described by #7156.
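A hedged usage sketch of the compile-then-match flow; the import path and the Matches signature are assumptions for illustration, not the confirmed API:
```go
package example

import "github.com/tendermint/tendermint/internal/pubsub/query"

// matchesTxAtHeight compiles a query once and matches it against a set of
// events (here assumed to be attribute key -> values, for illustration).
func matchesTxAtHeight(events map[string][]string) (bool, error) {
	q := query.MustCompile(`tm.event = 'Tx' AND tx.height >= 5`)
	return q.Matches(events)
}
```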
Remove "unbuffered subscriptions" from the pubsub service.
Replace them with a dedicated blocking "observer" mechanism.
Use the observer mechanism for indexing.
Add a SubscribeWithArgs method and deprecate the old Subscribe
method. Remove SubscribeUnbuffered entirely (breaking).
Rework the Subscription interface to eliminate exposed channels.
Subscriptions now use a context to manage lifecycle notifications.
Internalize the eventbus package.
Addresses one of the concerns with #7041.
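A hedged sketch of the channel-free subscription flow described above; the exact signatures should be treated as assumptions:
```go
package example

import (
	"context"

	"github.com/tendermint/tendermint/internal/pubsub"
	"github.com/tendermint/tendermint/internal/pubsub/query"
)

// consume pulls messages under a context instead of reading from an
// exposed channel; canceling ctx ends the subscription.
func consume(ctx context.Context, s *pubsub.Server) error {
	sub, err := s.SubscribeWithArgs(ctx, pubsub.SubscribeArgs{
		ClientID: "indexer",
		Query:    query.MustCompile(`tm.event = 'NewBlock'`),
	})
	if err != nil {
		return err
	}
	for {
		msg, err := sub.Next(ctx) // blocks until a message or ctx ends
		if err != nil {
			return err
		}
		_ = msg // handle the message here
	}
}
```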
Provides a mechanism (via the RPC interface) to delete a single transaction, identified by its hash, from the mempool. The method returns an error if the transaction cannot be found. Once the transaction is removed, it remains in the cache and cannot be resubmitted until the cache is cleared or the entry expires from the cache.
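A hedged sketch of how a caller might use such an endpoint; the RemoveTx method name and client shape are hypothetical, used only to illustrate the flow:
```go
package example

import (
	"context"
	"fmt"
)

// mempoolClient is a stand-in for the RPC client; RemoveTx is a
// hypothetical method name, not the confirmed RPC surface.
type mempoolClient interface {
	RemoveTx(ctx context.Context, txHash []byte) error
}

func evict(ctx context.Context, c mempoolClient, txHash []byte) error {
	if err := c.RemoveTx(ctx, txHash); err != nil {
		return fmt.Errorf("remove tx: %w", err) // e.g. tx not found
	}
	// The tx remains in the cache, so resubmitting it is rejected until
	// the cache is cleared or the entry expires.
	return nil
}
```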
The code in the Tendermint repository makes heavy use of import aliasing.
This is made necessary by our extensive reuse of common base package names, and
by repetition of similar names across different subdirectories.
Unfortunately we have not been very consistent about which packages we alias in
various circumstances, and the aliases we use vary. In the spirit of the advice
in the style guide and https://github.com/golang/go/wiki/CodeReviewComments#imports,
this change makes an effort to clean up and normalize import aliasing.
This change makes no API or behavioral changes. It is a pure cleanup intended
to help make the code more readable to developers (including myself) trying to
understand what is being imported where.
Only unexported names have been modified, and the changes were generated and
applied mechanically with gofmt -r and comby, respecting the lexical and
syntactic rules of Go. Even so, I did not fix every inconsistency; where the
changes would have been too disruptive, I left them alone.
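For illustration only (not the exact commands used), a mechanical rewrite of a renamed qualifier might look like:
```shell
# gofmt -r rewrites matching expressions; single-letter lowercase
# identifiers (here, n) act as wildcards. goimports then fixes up the
# import block. The specific rule is a hypothetical example.
gofmt -r 'tmrand.Intn(n) -> rand.Intn(n)' -w .
goimports -w .
```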
The principles I followed in this cleanup are:
- Remove aliases that restate the package name.
- Remove aliases where the base package name is unambiguous.
- Move overly-terse abbreviations from the import to the usage site.
- Fix lexical issues (remove underscores, remove capitalization).
- Fix import groupings to more closely match the style guide.
- Group blank (side-effecting) imports and ensure they are commented.
- Add aliases to multiple imports with the same base package name.
Closes #2498.
Solves part of #3365.
Note: the event emission in the SwitchToFastSync path is difficult to test; we might need to change `stateSyncReactor` to an interface in the `nodeImpl` struct.
## Description
Expose p2p functions for use in the SDK.
These functions could also be copied over to the SDK instead; I don't have a preference as to which is better.
At Oasis we have spent some time writing a new Ed25519/X25519/sr25519 implementation called curve25519-voi. This PR switches the imports from ed25519consensus and go-schnorrkel to it, which should lead to performance gains on most systems.
Summary of changes:
* curve25519-voi is now used for Ed25519 operations, following the existing ZIP-215 semantics.
* curve25519-voi's public key cache is enabled (hardcoded size of 4096 entries, should be tuned, see the code comment) to accelerate repeated Ed25519 verification with the same public key(s).
* (BREAKING) curve25519-voi is now used for sr25519 operations. This is a breaking change, as the current sr25519 support does something decidedly non-standard when going from a MiniSecretKey to a SecretKey and/or PublicKey (the expansion routine is called twice). While I believe the new behavior (which expands once and only once) to be more "correct", this changes the semantics as implemented.
* curve25519-voi is now used for merlin since the included STROBE implementation produces much less garbage on the heap.
Side issues fixed:
* The version of go-schnorrkel that is currently imported by tendermint has a badly broken batch verification implementation. Upstream has fixed the issue after I reported it, so the version should be bumped in the interim.
Open design questions/issues:
* As noted, the public key cache size should be tuned. It is currently backed by a trivial thread-safe LRU cache, which is not scan-resistant, but replacing it with something better is a matter of implementing an interface.
* As far as I can tell, the only reason why serial verification on batch failure is necessary is to provide more detailed error messages (that are only used in some unit tests). If you trust the batch verification to be consistent with serial verification, then the fallback can be eliminated entirely (the BatchVerifier provided by the new library supports an option that omits the fallback if this is chosen as the way forward); see the sketch after this list.
* curve25519-voi's sr25519 support could use more optimization and more eyes on the code. The algorithm unfortunately is woefully under-specified, and the implementation was done primarily because I got really sad when I actually looked at go-schnorrkel, and we do not use the algorithm at this time.
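A sketch of the batch-verify-with-serial-fallback pattern mentioned above; the interfaces are stand-ins, not the exact curve25519-voi or Tendermint crypto API:
```go
package example

// PubKey and BatchVerifier are illustrative stand-in interfaces.
type PubKey interface {
	VerifySignature(msg, sig []byte) bool
}

type BatchVerifier interface {
	Add(key PubKey, msg, sig []byte) error
	Verify() bool
}

// verifyAll tries the batch first; on failure it re-checks serially, which
// is only needed to pinpoint which signature failed. If batch and serial
// verification are trusted to agree, the fallback loop can be dropped.
func verifyAll(bv BatchVerifier, keys []PubKey, msgs, sigs [][]byte) bool {
	for i := range msgs {
		if err := bv.Add(keys[i], msgs[i], sigs[i]); err != nil {
			return false
		}
	}
	if bv.Verify() {
		return true
	}
	for i := range msgs {
		if !keys[i].VerifySignature(msgs[i], sigs[i]) {
			return false
		}
	}
	return true
}
```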
## Description
Trying to debug a possible hashing issue by writing test vectors on 0.34 and then porting them to master to double-check that it's not a hashing issue.