During file rotation and WAL shutdown, there was a race condition between users
of an autofile and its termination. To fix this, ensure operations on an
autofile are properly synchronized, and report errors when attempting to use an
autofile after it has been closed.
Notably:
- Simplify the cancellation protocol between signal and Close.
- Exclude writers to an autofile during rotation.
- Add documentation about what is going on.
There is a lot more that could be improved here, but this addresses the more
obvious races that have been panicking unit tests.
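For illustration, here is a minimal sketch of the synchronization pattern described above. The names (`ErrClosed`, the field layout) are invented for the example and do not match the real autofile implementation in detail: writers take a mutex that also excludes rotation and Close, and any use after Close reports an error instead of racing.

```go
package autofile

import (
	"errors"
	"os"
	"sync"
)

// ErrClosed is a hypothetical sentinel returned once the file has been closed.
var ErrClosed = errors.New("autofile: use after close")

type AutoFile struct {
	mu     sync.Mutex // excludes writers during rotation and close
	file   *os.File
	closed bool
}

// Write synchronizes with Close and rotation, and reports an error
// rather than panicking if the file was already closed.
func (af *AutoFile) Write(p []byte) (int, error) {
	af.mu.Lock()
	defer af.mu.Unlock()
	if af.closed {
		return 0, ErrClosed
	}
	return af.file.Write(p)
}

// Close marks the file closed so later operations fail cleanly.
func (af *AutoFile) Close() error {
	af.mu.Lock()
	defer af.mu.Unlock()
	if af.closed {
		return nil
	}
	af.closed = true
	return af.file.Close()
}
```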
## What does this pull request do?
This pull request adds two metrics intended for use in calculating an experimental value for `MessageDelay`.
The metrics are as follows:
```
# HELP tendermint_consensus_complete_prevote_message_delay Difference in seconds between the proposal timestamp and the timestamp of the prevote that achieved 100% of the voting power in the prevote step.
# TYPE tendermint_consensus_complete_prevote_message_delay gauge
tendermint_consensus_complete_prevote_message_delay{chain_id="test-chain-aZbwF1"} 0.013025505
# HELP tendermint_consensus_quorum_prevote_message_delay Difference in seconds between the proposal timestamp and the timestamp of the prevote that achieved a quorum in the prevote step.
# TYPE tendermint_consensus_quorum_prevote_message_delay gauge
tendermint_consensus_quorum_prevote_message_delay{chain_id="test-chain-aZbwF1"} 0.013025505
```
## Why this change?
For more information on what these metrics are calculating, see #7202. The aim is to merge this change, backport these metrics to v0.34, and run nodes on a few popular chains with the metrics enabled to determine experimental values for `MessageDelay`. Those observations will then guide our choice of default `SynchronyParams.MessageDelay` value.
## Why Gauges for the metrics?
Gauges allow us to overwrite the metric on each successive observation. We can then capture these metrics over time to track the highest and lowest observed value.
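As a rough illustration of the gauge semantics, here is a sketch using the plain prometheus/client_golang API rather than Tendermint's metrics wrappers (the `chain_id` label and wiring are omitted): each new observation simply overwrites the previous value, and a scraper tracks the highs and lows over time.

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Stand-in for the quorum prevote delay gauge shown above.
	delay := prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: "tendermint",
		Subsystem: "consensus",
		Name:      "quorum_prevote_message_delay",
		Help:      "Difference in seconds between the proposal timestamp and the prevote that achieved a quorum.",
	})
	prometheus.MustRegister(delay)

	proposalTime := time.Now().Add(-13 * time.Millisecond)
	quorumPrevoteTime := time.Now()

	// Set overwrites the prior value on each observation; an external
	// scraper captures the highest and lowest observed values over time.
	delay.Set(quorumPrevoteTime.Sub(proposalTime).Seconds())
}
```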
This commit changes the behaviour of the /unconfirmed_txs endpoint by replacing the `limit` parameter with `page` and `perPage` parameters for pagination.
The test cases for unconfirmed_txs have been updated to properly exercise this change, and the API documentation has been updated as well.
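A usage sketch of the paginated call, with the parameter spellings taken from the description above (the wire-level names may differ):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical query against a local node's RPC endpoint.
	resp, err := http.Get("http://localhost:26657/unconfirmed_txs?page=1&perPage=30")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```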
The custom error types in the provider package did not propagate their wrapped
underlying reasons, making it difficult for the test to check that the correct
error was observed.
- Fix the custom errors to have a true underlying error (not just a string).
- Add Unwrap methods to support inspection by errors.Is.
- Update usage in a few places.
- Fix the test to check for acceptable variation.
Fixes #7609.
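A minimal sketch of the pattern (the provider package's actual error types and fields differ in detail): the error wraps a true underlying error and exposes it via Unwrap so errors.Is can inspect it.

```go
package provider

import (
	"errors"
	"fmt"
)

// ErrNoResponse is a hypothetical sentinel used for illustration.
var ErrNoResponse = errors.New("no response from provider")

// ErrBadLightBlock carries a real underlying error rather than a string.
type ErrBadLightBlock struct {
	Reason error
}

func (e ErrBadLightBlock) Error() string {
	return fmt.Sprintf("client provided bad signed header: %v", e.Reason)
}

// Unwrap exposes the underlying reason for errors.Is / errors.As.
func (e ErrBadLightBlock) Unwrap() error { return e.Reason }

func example() bool {
	err := ErrBadLightBlock{Reason: ErrNoResponse}
	// Matches because Unwrap propagates the wrapped reason.
	return errors.Is(err, ErrNoResponse)
}
```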
After writing and then reading a bunch of random messages, the test was
checking that it did not read the same number of messages that it wrote.
The sense of this check was inverted; they should match.
Introduced by accident in #7522. I'm not sure why this did not show up in CI.
Edit: I now know why it didn't show up in CI: #7608.
Add package jsontypes that implements a subset of the custom libs/json
package. Specifically it handles encoding and decoding of interface types
wrapped in "tagged" JSON objects. It omits the deep reflection on arbitrary
types, preserving only the handling of type-tag wrapper encoding.
- Register interface types (Evidence, PubKey, PrivKey) for tagged encoding.
- Update the existing implementations to satisfy the type.
- Register those types with the jsontypes registry.
- Add string tags to 64-bit integer fields where needed.
- Add marshalers to structs that export interface-typed fields.
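A rough sketch of the tagged-wrapper idea (the helper and type names here are illustrative, not the actual jsontypes API): an interface value is encoded as a small object carrying a registered type tag alongside the concrete value.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// taggedWrapper is the illustrative wire form: {"type": "...", "value": ...}.
type taggedWrapper struct {
	Type  string          `json:"type"`
	Value json.RawMessage `json:"value"`
}

// A hypothetical concrete type registered under the tag "tendermint/PubKeyEd25519".
type ed25519PubKey struct {
	Key []byte `json:"key"`
}

func marshalTagged(tag string, v interface{}) ([]byte, error) {
	raw, err := json.Marshal(v)
	if err != nil {
		return nil, err
	}
	return json.Marshal(taggedWrapper{Type: tag, Value: raw})
}

func main() {
	out, err := marshalTagged("tendermint/PubKeyEd25519", ed25519PubKey{Key: []byte{1, 2, 3}})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // {"type":"tendermint/PubKeyEd25519","value":{"key":"AQID"}}
}
```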
Instead of taking a comma-separated string of parameter names, take each
parameter name as a separate argument. Now that we no longer have an extra flag
for caching, this fits nicely into a variadic trailer.
* Update all usage of NewRPCFunc and NewWSRPCFunc.
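A simplified sketch of the signature change (the real NewRPCFunc takes additional arguments; the handler names in the usage comment are only examples):

```go
package rpcserver

// RPCFunc sketch: parameter names are now a variadic trailer rather than a
// single comma-separated string.
type RPCFunc struct {
	f        interface{}
	argNames []string
}

func NewRPCFunc(f interface{}, argNames ...string) *RPCFunc {
	return &RPCFunc{f: f, argNames: argNames}
}

// Example usage (hypothetical handler):
//
//   before: NewRPCFunc(env.ABCIQuery, "path,data,height,prove")
//   after:  NewRPCFunc(env.ABCIQuery, "path", "data", "height", "prove")
```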
Define interfaces for the various methods a service may implement. This is
basically just the set of things on Environment that are exported as RPCs, but
these are also implemented by the light proxy.
* internal/rpc: use NewRoutesMap to construct routes on service start
* light/proxy: use NewRoutesMap to construct RPC routes
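A conceptual sketch, with invented names and placeholder result types, of splitting the RPC surface into small interfaces that both the Environment and the light proxy can satisfy:

```go
package coretypes

import "context"

// Hypothetical slices of the RPC surface; the real interfaces cover the
// methods exported from Environment and implemented by the light proxy.
type StatusService interface {
	Status(ctx context.Context) (*ResultStatus, error)
}

type NetworkService interface {
	NetInfo(ctx context.Context) (*ResultNetInfo, error)
	Health(ctx context.Context) (*ResultHealth, error)
}

// Placeholder result types for the sketch.
type ResultStatus struct{}
type ResultNetInfo struct{}
type ResultHealth struct{}
```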
We should not set cache-control headers on RPC responses. HTTP caching
interacts poorly with resources that are expected to change frequently, or
whose rate of change is unpredictable.
More subtly, all calls to the POST endpoint use the same URL, which means a
cacheable response from one call may actually "hide" an uncacheable response
from a subsequent one. This is less of a problem for the GET endpoints, but
that means the behaviour of RPCs varies depending on which HTTP method your
client happens to use. Websocket requests were already marked statically
uncacheable, adding yet a third combination.
To address this:
- Stop setting cache-control headers.
- Update the tests that were checking for those headers.
- Remove the flags to request cache-control.
Apart from affecting the HTTP response headers, this change does not modify the
behaviour of any of the RPC methods.
* Rename rpctypes.Context to CallInfo.
Add methods to attach and recover this value from a context.Context.
* Rework RPC method handlers to accept "real" contexts.
- Replace *rpctypes.Context arguments with context.Context.
- Update usage of RPC context fields to use CallInfo.
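A minimal sketch of the attach/recover pattern described above (names approximate the rpctypes API, and the CallInfo fields are placeholders):

```go
package rpctypes

import "context"

// CallInfo carries per-request RPC metadata that handlers may need,
// now threaded through a standard context.Context.
type CallInfo struct {
	HTTPRequest interface{} // placeholder for the real field set
}

type callInfoKey struct{}

// WithCallInfo attaches ci to the context for the duration of a call.
func WithCallInfo(ctx context.Context, ci *CallInfo) context.Context {
	return context.WithValue(ctx, callInfoKey{}, ci)
}

// GetCallInfo recovers the CallInfo attached to ctx, or nil if absent.
func GetCallInfo(ctx context.Context) *CallInfo {
	ci, _ := ctx.Value(callInfoKey{}).(*CallInfo)
	return ci
}
```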
Where possible, replace uses of the custom JSON library with the standard
library. The custom library treats interface and unnamed literal types
differently, so this change avoids those even where it would probably be safe
to switch them.
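A small illustration of the distinction (types invented for the example): plain named structs round-trip the same way under encoding/json, while interface-typed fields rely on the custom library's type-tag wrappers and were therefore left on libs/json.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Safe to switch: a plain struct encodes identically under encoding/json.
type BlockMeta struct {
	Height int64  `json:"height,string"` // string tag preserves 64-bit precision
	Hash   []byte `json:"hash"`
}

// Not switched: an interface-typed field needs the custom library's
// type-tagged wrapper to be decodable, so it keeps using libs/json.
type Wrapper struct {
	Evidence interface{} `json:"evidence"`
}

func main() {
	out, _ := json.Marshal(BlockMeta{Height: 10, Hash: []byte{0xAB}})
	fmt.Println(string(out)) // {"height":"10","hash":"qw=="}
}
```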