# ADR 053: State Sync Prototype

This ADR outlines the plan for an initial state sync prototype, and is subject to change as we gain feedback and experience. It builds on the discussions and findings in [ADR-042](./adr-042-state-sync.md); see that ADR for background information.

## Changelog

* 2020-01-28: Initial draft (Erik Grinaker)

* 2020-02-18: Updates after initial prototype (Erik Grinaker)
    * ABCI: added missing `reason` fields.
    * ABCI: used 32-bit 1-based chunk indexes (was 64-bit 0-based).
    * ABCI: moved `RequestApplySnapshotChunk.chain_hash` to `RequestOfferSnapshot.app_hash`.
    * Gaia: snapshots must include node versions as well, both for inner and leaf nodes.
    * Added experimental prototype info.
    * Added open questions and implementation plan.
## Context

State sync will allow a new node to receive a snapshot of the application state without downloading blocks or going through consensus. This bootstraps the node significantly faster than the current fast sync system, which replays all historical blocks.

Background discussions and justifications are detailed in [ADR-042](./adr-042-state-sync.md). Its recommendations can be summarized as:

* The application periodically takes full state snapshots (i.e. eager snapshots).
* The application splits snapshots into smaller chunks that can be individually verified against a chain app hash.
* Tendermint uses the light client to obtain a trusted chain app hash for verification.
* Tendermint discovers and downloads snapshot chunks in parallel from multiple peers, and passes them to the application via ABCI to be applied and verified against the chain app hash.
* Historical blocks are not backfilled, so state synced nodes will have a truncated block history.
## Tendermint Proposal

This describes the snapshot/restore process as seen from Tendermint. The interface is kept as small and general as possible to give applications maximum flexibility.

### Snapshot Data Structure

A node can have multiple snapshots taken at various heights. Snapshots can be taken in different application-specified formats (e.g. MessagePack as format `1` and Protobuf as format `2`, or similarly for schema versioning). Each snapshot consists of multiple chunks containing the actual state data, allowing parallel downloads and reduced memory usage.
```proto
message Snapshot {
  uint64 height = 1;   // The height at which the snapshot was taken
  uint32 format = 2;   // The application-specific snapshot format
  uint32 chunks = 3;   // The number of chunks in the snapshot
  bytes metadata = 4;  // Arbitrary application metadata
}

message SnapshotChunk {
  uint64 height = 1;   // The height of the corresponding snapshot
  uint32 format = 2;   // The application-specific snapshot format
  uint32 chunk = 3;    // The chunk index (one-based)
  bytes data = 4;      // Serialized application state in an arbitrary format
  bytes checksum = 5;  // SHA-1 checksum of data
}
```
Chunk verification data must be encoded along with the state data in the `data` field.

Chunk `data` cannot be larger than 64 MB, and snapshot `metadata` cannot be larger than 64 KB.
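The size limits above can be enforced with a couple of trivial checks before a message is accepted. The constants come straight from this ADR; the function names are illustrative, not actual Tendermint APIs.

```go
// Sketch of the chunk/metadata size limits: 64 MB for chunk data,
// 64 KB for snapshot metadata. Names here are illustrative only.
package main

import "fmt"

const (
	maxChunkSize    = 64 * 1024 * 1024 // 64 MB limit on chunk data
	maxMetadataSize = 64 * 1024        // 64 KB limit on snapshot metadata
)

// validChunk reports whether a chunk's data is within the size limit.
func validChunk(data []byte) bool { return len(data) <= maxChunkSize }

// validMetadata reports whether snapshot metadata is within the size limit.
func validMetadata(metadata []byte) bool { return len(metadata) <= maxMetadataSize }

func main() {
	fmt.Println(validChunk(make([]byte, 1024)))        // small chunk: true
	fmt.Println(validMetadata(make([]byte, 128*1024))) // oversized metadata: false
}
```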
### ABCI Interface

```proto
// Lists available snapshots
message RequestListSnapshots {}

message ResponseListSnapshots {
  repeated Snapshot snapshots = 1;
}

// Offers a snapshot to the application
message RequestOfferSnapshot {
  Snapshot snapshot = 1;
  bytes app_hash = 2;
}

message ResponseOfferSnapshot {
  bool accepted = 1;
  Reason reason = 2;  // Reason why snapshot was rejected
  enum Reason {
    unknown = 0;        // Unknown or generic reason
    invalid_height = 1; // Height is rejected: avoid this height
    invalid_format = 2; // Format is rejected: avoid this format
  }
}

// Fetches a snapshot chunk
message RequestGetSnapshotChunk {
  uint64 height = 1;
  uint32 format = 2;
  uint32 chunk = 3;
}

message ResponseGetSnapshotChunk {
  SnapshotChunk chunk = 1;
}

// Applies a snapshot chunk
message RequestApplySnapshotChunk {
  SnapshotChunk chunk = 1;
}

message ResponseApplySnapshotChunk {
  bool applied = 1;
  Reason reason = 2;  // Reason why chunk failed
  enum Reason {
    unknown = 0;       // Unknown or generic reason
    verify_failed = 1; // Chunk verification failed
  }
}
```
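As a hypothetical illustration of the offer/reject flow, an application might answer `RequestOfferSnapshot` by accepting only formats it knows how to restore. The types below mirror the proto messages but are hand-written for the sketch; they are not the generated ABCI types, and the handler shape is an assumption.

```go
// Illustrative handler for the snapshot offer: accept a supported format,
// otherwise reject with invalid_format so that format is skipped entirely.
package main

import "fmt"

type Snapshot struct {
	Height uint64
	Format uint32
	Chunks uint32
}

const (
	ReasonUnknown       = 0
	ReasonInvalidFormat = 2
)

// offerSnapshot accepts a snapshot iff its format is supported.
func offerSnapshot(s Snapshot, supported map[uint32]bool) (accepted bool, reason int) {
	if !supported[s.Format] {
		return false, ReasonInvalidFormat
	}
	return true, ReasonUnknown
}

func main() {
	supported := map[uint32]bool{1: true}
	ok, _ := offerSnapshot(Snapshot{Height: 3000, Format: 1, Chunks: 12}, supported)
	fmt.Println(ok) // true
	ok, reason := offerSnapshot(Snapshot{Height: 3000, Format: 2, Chunks: 12}, supported)
	fmt.Println(ok, reason) // false 2
}
```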
### Taking Snapshots

Tendermint is not aware of the snapshotting process at all; it is entirely an application concern. The application must provide the following guarantees:

* **Periodic:** snapshots must be taken periodically, not on-demand, for faster restores, lower load, and less DoS risk.
* **Deterministic:** snapshots must be deterministic, and identical across all nodes - typically by taking a snapshot at given height intervals.
* **Consistent:** snapshots must be consistent, i.e. not affected by concurrent writes - typically by using a data store that supports versioning and/or snapshot isolation.
* **Asynchronous:** snapshots must be asynchronous, i.e. not halt block processing and state transitions.
* **Chunked:** snapshots must be split into chunks of reasonable size (on the order of megabytes), and each chunk must be verifiable against the chain app hash.
* **Garbage collected:** snapshots must be garbage collected periodically.
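The "periodic", "deterministic", and "asynchronous" guarantees can be sketched together: a fixed height interval makes the trigger deterministic across nodes, and running the snapshot off the commit path keeps it asynchronous. This is a minimal illustration with made-up names, not SDK code.

```go
// Minimal sketch: snapshots trigger deterministically at fixed height
// intervals and run in a goroutine so block processing is not halted.
package main

import (
	"fmt"
	"sync"
)

// shouldSnapshot is the deterministic trigger: all nodes configured with the
// same interval snapshot at the same heights.
func shouldSnapshot(height, interval uint64) bool {
	return interval > 0 && height%interval == 0
}

func main() {
	var wg sync.WaitGroup
	for height := uint64(1); height <= 3000; height++ {
		if shouldSnapshot(height, 1000) {
			wg.Add(1)
			go func(h uint64) { // asynchronous: the height loop keeps running
				defer wg.Done()
				fmt.Println("snapshotting height", h)
			}(height)
		}
	}
	wg.Wait()
}
```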
### Restoring Snapshots

Nodes should have options for enabling state sync and/or fast sync, and be provided a trusted header hash for the light client.

When starting an empty node with state sync and fast sync enabled, snapshots are restored as follows:

1. The node checks that it is empty, i.e. that it has no state nor blocks.

2. The node contacts the given seeds to discover peers.

3. The node contacts a set of full nodes, and verifies the trusted block header using the given hash via the light client.

4. The node requests available snapshots via `RequestListSnapshots`. Snapshots with `metadata` greater than 64 KB are rejected.

5. The node iterates over all snapshots in reverse order by height and format until it finds one that satisfies all of the following conditions:

    * The snapshot height's block is considered trustworthy by the light client (i.e. the snapshot height is greater than the trusted header's and within the unbonding period of the latest trustworthy block).

    * The snapshot's height or format hasn't been explicitly rejected by an earlier `RequestOfferSnapshot` call (via `invalid_height` or `invalid_format`).

    * The application accepts the `RequestOfferSnapshot` call.

6. The node downloads chunks in parallel from multiple peers via `RequestGetSnapshotChunk`, and both the sender and receiver verify their checksums. Chunks with `data` greater than 64 MB are rejected.

7. The node passes chunks sequentially to the app via `RequestApplySnapshotChunk`, along with the chain's app hash at the snapshot height for verification. If a chunk is rejected, the node should retry it. If it was rejected with `verify_failed`, it should be refetched from a different source. If an internal error occurred, `ResponseException` should be returned and state sync should be aborted.

8. Once all chunks have been applied, the node compares the app hash to the chain app hash, and if they do not match it either errors or discards the state and starts over.

9. The node switches to fast sync to catch up blocks that were committed while restoring the snapshot.

10. The node switches to normal consensus mode.
## Gaia Proposal

This describes the snapshot process as seen from Gaia, using format version `1`. The serialization format is unspecified, but likely to be compressed Amino or Protobuf.

### Snapshot Metadata

In the initial version there is no snapshot metadata, so it is set to an empty byte buffer.

Once all chunks have been successfully built, snapshot metadata should be serialized and stored in the file system as e.g. `snapshots/<height>/<format>/metadata`, and served via `RequestListSnapshots`.
### Snapshot Chunk Format

The Gaia data structure consists of a set of named IAVL trees. A root hash is constructed by taking the root hashes of each of the IAVL trees, then constructing a Merkle tree of the sorted name/hash map.
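The shape of that construction can be sketched as follows. This is a simplified binary Merkle tree over SHA-256 hashes of the sorted name/hash pairs; the SDK uses its own simple Merkle encoding, so treat this only as an illustration of the structure, not the actual hash computation.

```go
// Simplified sketch of the multistore root: sort store names, hash each
// name/root-hash pair into a leaf, and fold leaves into a binary Merkle root.
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

func leafHash(name string, hash []byte) []byte {
	h := sha256.Sum256(append([]byte(name), hash...))
	return h[:]
}

func innerHash(left, right []byte) []byte {
	h := sha256.Sum256(append(left, right...))
	return h[:]
}

// multistoreRoot builds a deterministic root over the named store hashes.
func multistoreRoot(stores map[string][]byte) []byte {
	names := make([]string, 0, len(stores))
	for name := range stores {
		names = append(names, name)
	}
	sort.Strings(names) // sorting makes the result map-order independent
	hashes := make([][]byte, len(names))
	for i, name := range names {
		hashes[i] = leafHash(name, stores[name])
	}
	for len(hashes) > 1 {
		var next [][]byte
		for i := 0; i < len(hashes); i += 2 {
			if i+1 < len(hashes) {
				next = append(next, innerHash(hashes[i], hashes[i+1]))
			} else {
				next = append(next, hashes[i]) // odd node is promoted
			}
		}
		hashes = next
	}
	return hashes[0]
}

func main() {
	stores := map[string][]byte{"staking": {0x01}, "bank": {0x02}, "gov": {0x03}}
	fmt.Printf("%x\n", multistoreRoot(stores))
}
```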
IAVL trees are versioned, but a snapshot only contains the version relevant for the snapshot height. All historical versions are ignored.

IAVL trees are insertion-order dependent, so key/value pairs must be set in an appropriate insertion order to produce the same tree branching structure. This insertion order can be found by doing a breadth-first scan of all nodes (including inner nodes) and collecting unique keys in order. However, the node hash also depends on the node's version, so snapshots must contain the inner nodes' version numbers as well.
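The breadth-first key collection described above can be sketched on a toy tree. The node layout here is illustrative, not the actual IAVL node structure; the point is that a level-order walk visits inner keys before the leaves beneath them, and duplicates are collected only once.

```go
// insertionOrder walks a tree breadth-first and returns each key the first
// time it is seen, giving an order that reproduces the branching structure
// when replayed as inserts.
package main

import "fmt"

type node struct {
	key         string
	left, right *node
}

func insertionOrder(root *node) []string {
	var order []string
	seen := map[string]bool{}
	queue := []*node{root}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		if n == nil {
			continue
		}
		if !seen[n.key] {
			seen[n.key] = true
			order = append(order, n.key)
		}
		queue = append(queue, n.left, n.right)
	}
	return order
}

func main() {
	// the inner key "b" also appears at a leaf, so it is collected only once
	tree := &node{key: "b",
		left:  &node{key: "a"},
		right: &node{key: "c", left: &node{key: "b"}},
	}
	fmt.Println(insertionOrder(tree)) // [b a c]
}
```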
For the initial prototype, each chunk consists of a complete dump of all node data for all nodes in an entire IAVL tree. Thus the number of chunks equals the number of persistent stores in Gaia. No incremental verification of chunks is done, only a final app hash comparison at the end of the snapshot restoration.

For a production version, it should be sufficient to store key/value/version for all nodes (leaf and inner) in insertion order, chunked in some appropriate way. If per-chunk verification is required, the chunk must also contain enough information to reconstruct the Merkle proofs all the way up to the root of the multistore, e.g. by storing a complete subtree's key/value/version data plus Merkle hashes of all other branches up to the multistore root. The exact approach will depend on tradeoffs between size, time, and verification. IAVL RangeProofs are not recommended, since they include redundant data such as proofs for intermediate and leaf nodes that can be derived from the above data.

Chunks should be built greedily by collecting node data up to some size limit (e.g. 32 MB) and serializing it. Chunk data is stored in the file system as `snapshots/<height>/<format>/<chunk>/data`, along with a SHA-1 checksum in `snapshots/<height>/<format>/<chunk>/checksum`, and served via `RequestGetSnapshotChunk`.
### Snapshot Scheduling

Snapshots should be taken at some configurable height interval, e.g. every 1000 blocks. All nodes should preferably have the same snapshot schedule, such that all nodes can serve chunks for a given snapshot.

Taking consistent snapshots of IAVL trees is greatly simplified by them being versioned: simply snapshot the version that corresponds to the snapshot height, while concurrent writes create new versions. IAVL pruning must not prune a version that is being snapshotted.

Snapshots must also be garbage collected after some configurable time, e.g. by keeping the latest `n` snapshots.
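The keep-latest-`n` garbage collection policy is a one-liner over the snapshot heights. A sketch, with snapshot bookkeeping reduced to a slice of heights:

```go
// pruneSnapshots returns the heights to keep: the n largest, i.e. the
// latest n snapshots per the garbage collection policy above.
package main

import (
	"fmt"
	"sort"
)

func pruneSnapshots(heights []uint64, keep int) []uint64 {
	sorted := append([]uint64(nil), heights...) // copy, leave input intact
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] > sorted[j] })
	if len(sorted) > keep {
		sorted = sorted[:keep]
	}
	return sorted
}

func main() {
	fmt.Println(pruneSnapshots([]uint64{1000, 2000, 3000, 4000}, 2)) // [4000 3000]
}
```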
## Experimental Prototype

An experimental but functional state sync prototype is available in the `erik/statesync-prototype` branches of the Tendermint, IAVL, Cosmos SDK, and Gaia repositories. To fetch the necessary branches:

```sh
$ mkdir statesync
$ cd statesync
$ git clone git@github.com:tendermint/tendermint -b erik/statesync-prototype
$ git clone git@github.com:tendermint/iavl -b erik/statesync-prototype
$ git clone git@github.com:cosmos/cosmos-sdk -b erik/statesync-prototype
$ git clone git@github.com:cosmos/gaia -b erik/statesync-prototype
```

To spin up three nodes of a four-node testnet:

```sh
$ cd gaia
$ ./tools/start.sh
```

Wait for the first snapshot to be taken at height 3, then (in a separate terminal) start the fourth node with state sync enabled:

```sh
$ ./tools/sync.sh
```

To stop the testnet, run:

```sh
$ ./tools/stop.sh
```
## Open Questions

* Should we have a simpler scheme for discovering snapshots? E.g. announce supported formats, and have each peer supply its latest available snapshot.

    Downsides: the app has to announce supported formats, and having a single snapshot per peer may leave fewer peers available for the chosen snapshot.
## Resolved Questions

* Is it OK for state-synced nodes to not have historical blocks nor historical IAVL versions?

    > Yes, this is as intended. Maybe backfill blocks later.

* Do we need incremental chunk verification for the first version?

    > No, we'll start simple. Can add chunk verification via a new snapshot format without any breaking changes in Tendermint. For adversarial conditions, maybe consider support for whitelisting peers to download chunks from.

* Should the snapshot ABCI interface be a separate optional ABCI service, or mandatory?

    > Mandatory, to keep things simple for now. It will therefore be a breaking change and push the release. For apps using the Cosmos SDK, we can provide a default implementation that does not serve snapshots and errors when trying to apply them.

* How can we make sure `ListSnapshots` data is valid? An adversary can provide fake/invalid snapshots to DoS peers.

    > For now, just pick snapshots that are available on a large number of peers. Maybe support whitelisting. We may consider e.g. placing snapshot manifests on the blockchain later.

* Should we punish nodes that provide invalid snapshots? How?

    > No, these are full nodes, not validators, so we can't punish them. Just disconnect from them and ignore them.

* Should we call these snapshots? The SDK already uses the term "snapshot" for `PruningOptions.SnapshotEvery`, and state sync will introduce additional SDK options for snapshot scheduling and pruning that are not related to IAVL snapshotting or pruning.

    > Yes. Hopefully these concepts are distinct enough that we can refer to state sync snapshots and IAVL snapshots without too much confusion.

* Should we store snapshot and chunk metadata in a database? Can we use the database for chunks?

    > As a first approach, store metadata in a database and chunks in the filesystem.

* Should a snapshot at height H be taken before or after the block at H is processed? E.g. RPC `/commit` returns the app_hash after the _previous_ height, i.e. _before_ the current height.

    > After commit.

* Do we need to support all versions of the blockchain reactor (i.e. fast sync)?

    > We should remove the v1 reactor completely once v2 has stabilized.

* Should `ListSnapshots` be a streaming API instead of a request/response API?

    > No, just use a max message size.
## Implementation Plan

### Core Tasks

* **Tendermint:** light client P2P transport [#4456](https://github.com/tendermint/tendermint/issues/4456)
* **IAVL:** export/import API [#210](https://github.com/tendermint/iavl/issues/210)
* **Cosmos SDK:** snapshotting, scheduling, and pruning [#5689](https://github.com/cosmos/cosmos-sdk/issues/5689)
* **Tendermint:** support starting with a truncated block history
* **Tendermint:** state sync reactor and ABCI interface [#828](https://github.com/tendermint/tendermint/issues/828)
* **Cosmos SDK:** snapshot ABCI implementation [#5690](https://github.com/cosmos/cosmos-sdk/issues/5690)

### Nice-to-Haves

* **Tendermint:** staged reactor startup (state sync → fast sync → block replay → wal replay → consensus)

    > Let's do a time-boxed prototype (a few days) and see how much work it will be.

    * Notify P2P peers about channel changes [#4394](https://github.com/tendermint/tendermint/issues/4394)
    * Check that peers have certain channels [#1148](https://github.com/tendermint/tendermint/issues/1148)

* **Tendermint:** prune blockchain history [#3652](https://github.com/tendermint/tendermint/issues/3652)
* **Tendermint:** allow genesis to start from a non-zero height [#2543](https://github.com/tendermint/tendermint/issues/2543)
### Follow-up Tasks

* **Tendermint:** light client verification for fast sync [#4457](https://github.com/tendermint/tendermint/issues/4457)
* **Tendermint:** allow start with only blockstore [#3713](https://github.com/tendermint/tendermint/issues/3713)
* **Tendermint:** node should go back to fast-syncing when lagging significantly [#129](https://github.com/tendermint/tendermint/issues/129)
## Status

Accepted

## References

* [ADR-042](./adr-042-state-sync.md) and its references