# RFC 003: Taxonomy of potential performance issues in Tendermint

## Changelog

- 2021-09-02: Created initial draft (@wbanfield)
- 2021-09-14: Add discussion of the event system (@wbanfield)

## Abstract

This document discusses the various sources of performance issues in Tendermint and
attempts to clarify what work may be required to understand and address them.
## Background

Performance, loosely defined as the ability of a software process to perform its work
quickly and efficiently under load and within reasonable resource limits, is a frequent
topic of discussion in the Tendermint project.

To effectively address any issues with Tendermint performance, we need to
categorize the various issues, understand their potential sources, and gauge their
impact on users.

Categorizing the different known performance issues will allow us to discuss and fix them
more systematically. This document proposes a rough taxonomy of performance issues
and highlights areas where more research into potential performance problems is required.

Understanding Tendermint's performance limitations will also be critically important
as we make changes to many of its subsystems. Performance is a central concern for
upcoming decisions regarding the `p2p` protocol, RPC message encoding and structure,
database usage and selection, and consensus protocol updates.
## Discussion

This section attempts to delineate the different sections of Tendermint functionality
that are often cited as having performance issues. It raises questions and suggests
lines of inquiry that may be valuable for better understanding Tendermint's performance issues.

As a note: we should avoid quickly adding many microbenchmarks or package-level benchmarks.
These are prone to being worse than useless, as they can obscure what _should_ be
focused on: performance of the system from the perspective of a user. We should,
instead, tune performance with an eye toward user needs and the actions users take. These users comprise
both operators of Tendermint chains and the people generating transactions for
Tendermint chains. Both of these sets of users are largely aligned in wanting an end-to-end
system that operates quickly and efficiently.

REQUEST: The list below may be incomplete. If there are additional sections that are often
cited as creating poor performance, please comment so that they may be included.
### P2P

#### Claim: Tendermint cannot scale to large numbers of nodes

Users have reported that Tendermint networks cannot scale to large numbers of nodes.
The number of nodes reported as causing issues was in the thousands.
We don't currently have evidence about the upper limit on the number of nodes that Tendermint's
P2P stack can scale to.

We need to more concretely understand the source of the issues and determine which layer
is causing the problem. It's possible that the P2P layer, in the absence of any reactors
sending data, is perfectly capable of managing thousands of peer connections. For
a reasonable networking and application setup, thousands of connections should not present any
issue for the application.

We need more data to understand the problem directly. We want to drive the popularity
and adoption of Tendermint, and this will mean allowing for chains with more validators.
We should follow up with users experiencing this issue. We may then want to add
a series of metrics to the P2P layer to better understand the inefficiencies it produces.

The following metrics can help us understand the sources of latency in the Tendermint P2P stack:

* Number of messages sent and received per second
* Time a message spends on the P2P layer send and receive queues

The following metrics exist and should be leveraged in addition to those added:

* Number of peers a node is connected to
* Number of bytes per channel sent to and received from each peer
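
As a rough illustration of the queue-residency metric above, the sketch below timestamps messages as they enter a per-peer send queue and measures the wait on dequeue. All names here (`queuedMsg`, `sendQueue`) are hypothetical stand-ins, not Tendermint's actual P2P types; a real implementation would feed the measured durations into a histogram metric.

```go
package main

import (
	"fmt"
	"time"
)

// queuedMsg wraps an outbound message with the time it entered the
// send queue, so queue residency can be measured on dequeue.
type queuedMsg struct {
	payload  []byte
	enqueued time.Time
}

// sendQueue is a minimal stand-in for a per-peer send queue.
type sendQueue struct {
	ch chan queuedMsg
}

func newSendQueue(capacity int) *sendQueue {
	return &sendQueue{ch: make(chan queuedMsg, capacity)}
}

func (q *sendQueue) enqueue(b []byte) {
	q.ch <- queuedMsg{payload: b, enqueued: time.Now()}
}

// dequeue returns the next message and how long it waited in the queue.
// That duration is what a "time on send queue" metric would observe.
func (q *sendQueue) dequeue() ([]byte, time.Duration) {
	m := <-q.ch
	return m.payload, time.Since(m.enqueued)
}

func main() {
	q := newSendQueue(16)
	q.enqueue([]byte("vote"))
	time.Sleep(5 * time.Millisecond) // simulate a busy writer goroutine
	msg, wait := q.dequeue()
	fmt.Printf("sent %q after %v on the queue\n", msg, wait)
}
```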
### Sync

#### Claim: Block Syncing is slow

Bootstrapping a new node in a network to the height of the rest of the network is believed to
take longer than users would like. Block sync requires fetching all of the blocks from
peers and writing them to local disk for storage. A useful line of inquiry
is understanding how quickly a perfectly tuned system _could_ fetch all of the state
over a network, so that we understand how much overhead Tendermint actually adds.

The operation is likely to be _incredibly_ dependent on the environment in which
the node is being run. The factors that will influence syncing include:

1. Number of peers that a syncing node may fetch from.
2. Speed of the disk that a validator is writing to.
3. Speed of the network connection between the different peers that the node is
syncing from.

We should calculate how quickly this operation _could possibly_ complete for common chains and nodes.
To calculate how quickly this operation could possibly complete, we should assume that
a node is reading at the line rate of the NIC and writing at full drive speed to its
local storage. Comparing this theoretical upper limit to the actual sync times
observed by node operators will give us a good point of comparison for understanding
how much overhead Tendermint incurs.
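
The back-of-envelope calculation described above can be sketched as follows. The bound comes from whichever of the NIC and the disk is slower; the figures in `main` are hypothetical examples, not measurements, and the sketch ignores validation and protocol overhead entirely.

```go
package main

import "fmt"

// theoreticalSyncSeconds returns a lower bound on block sync time:
// the chain must both arrive over the network and be written to disk,
// so the slower of the two rates dominates. All rates are bytes/second.
func theoreticalSyncSeconds(chainBytes, nicBytesPerSec, diskBytesPerSec float64) float64 {
	bottleneck := nicBytesPerSec
	if diskBytesPerSec < bottleneck {
		bottleneck = diskBytesPerSec
	}
	return chainBytes / bottleneck
}

func main() {
	// Hypothetical example: a 500 GiB chain, a 1 Gbit/s NIC (~125 MB/s),
	// and an SSD writing at 500 MB/s. The NIC is the bottleneck.
	const gib = 1 << 30
	secs := theoreticalSyncSeconds(500*gib, 125e6, 500e6)
	fmt.Printf("lower bound: %.1f hours\n", secs/3600)
}
```

Comparing a number like this against observed sync times for the same chain is what isolates Tendermint's own overhead.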
We should additionally add metrics to the blocksync operation to more clearly pinpoint
slow operations. The following metrics should be added to the block syncing operation:

* Time to fetch and validate each block
* Time to execute a block
* Blocks synced per unit time
### Application

Applications performing complex state transitions have the potential to bottleneck
the Tendermint node.

#### Claim: ABCI block delivery could cause slowdown

ABCI delivers blocks via several methods: `BeginBlock`, `DeliverTx`, `EndBlock`, `Commit`.
Tendermint delivers transactions one by one via the `DeliverTx` call. Most of the
transaction delivery in Tendermint occurs asynchronously and therefore appears unlikely to
form a bottleneck in ABCI.

After delivering all transactions, Tendermint then calls the `Commit` ABCI method.
Tendermint [locks all access to the mempool][abci-commit-description] while `Commit`
proceeds. This means that an application that is slow to execute all of its
transactions or finalize state during the `Commit` method will prevent any new
transactions from being added to the mempool. Apps that are slow to commit will also
prevent consensus from proceeding to the next consensus height, since Tendermint
cannot validate or produce block proposals without the
AppHash obtained from the `Commit` method. We should add a metric for each
step in the ABCI protocol to track the amount of time that a node spends communicating
with the application at each step.
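
A minimal sketch of the per-step timing metric proposed above: wrap each ABCI call in a timer keyed by step name. The `timeStep` helper and `stepTimes` map are hypothetical; a real implementation would record into a Prometheus histogram labeled by method rather than a plain map, and the `time.Sleep` calls merely stand in for application work.

```go
package main

import (
	"fmt"
	"time"
)

// stepTimes accumulates time spent in each ABCI step; a stand-in for a
// real metrics sink such as a Prometheus histogram.
var stepTimes = map[string]time.Duration{}

// timeStep measures how long one ABCI call takes and records it under
// the step's name.
func timeStep(step string, call func()) {
	start := time.Now()
	call()
	stepTimes[step] += time.Since(start)
}

func main() {
	// Simulate one block's worth of ABCI calls with placeholder work.
	timeStep("begin_block", func() { time.Sleep(time.Millisecond) })
	for i := 0; i < 3; i++ {
		timeStep("deliver_tx", func() { time.Sleep(time.Millisecond) })
	}
	timeStep("end_block", func() { time.Sleep(time.Millisecond) })
	timeStep("commit", func() { time.Sleep(5 * time.Millisecond) })

	for _, step := range []string{"begin_block", "deliver_tx", "end_block", "commit"} {
		fmt.Printf("%-12s %v\n", step, stepTimes[step])
	}
}
```

A per-step breakdown like this would make a slow `Commit`, and therefore a blocked mempool, immediately visible.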
#### Claim: ABCI serialization overhead causes slowdown

The most common way to run a Tendermint application is using the Cosmos SDK.
The Cosmos SDK runs the ABCI application within the same process as Tendermint.
When an application is run in the same process as Tendermint, no serialization penalty
is paid. This is because the local ABCI client does not serialize method calls
and instead passes the protobuf type through directly. This can be seen
in [local_client.go][abci-local-client-code].

Serialization and deserialization in the gRPC and socket-protocol ABCI clients
may cause slowdown. While these may cause issues, they are not part of the primary
use case of Tendermint and do not necessarily need to be addressed at this time.
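
To get a feel for the penalty an out-of-process client pays, the sketch below times an encode/decode round trip that an in-process call avoids entirely. The `tx` struct is a made-up stand-in for an ABCI request, and `encoding/gob` stands in for the protobuf wire format; the point is only that the local client's pointer pass-through skips this work.

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"time"
)

// tx is a hypothetical stand-in for an ABCI request message.
type tx struct {
	Data  []byte
	Round int64
}

// roundTrip encodes and decodes a message, as a socket or gRPC ABCI
// client must; the local client instead passes the pointer through.
func roundTrip(t *tx) (*tx, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(t); err != nil {
		return nil, err
	}
	var out tx
	if err := gob.NewDecoder(&buf).Decode(&out); err != nil {
		return nil, err
	}
	return &out, nil
}

func main() {
	msg := &tx{Data: bytes.Repeat([]byte("a"), 1024), Round: 7}

	start := time.Now()
	for i := 0; i < 10000; i++ {
		if _, err := roundTrip(msg); err != nil {
			panic(err)
		}
	}
	fmt.Printf("10k encode/decode round trips: %v\n", time.Since(start))
	// An in-process call with the same message is just a pointer copy,
	// which is why the local ABCI client pays no serialization penalty.
}
```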
### RPC

#### Claim: The Query API is slow

The query API locks a mutex across the ABCI connections. This causes consensus to
slow during queries, as ABCI is no longer able to make progress. This is known
to be causing issues in the Cosmos SDK and is being addressed [in the SDK][sdk-query-fix],
but a more robust solution may be required. Adding metrics to each ABCI client connection
and message, as described in the Application section of this document, would allow us
to further introspect the issue here.
#### Claim: RPC Serialization may cause slowdown

The Tendermint RPC uses a modified version of JSON-RPC. This RPC powers the `broadcast_tx_*` methods,
which are currently the critical path for adding transactions to Tendermint. These methods are
likely invoked quite frequently on popular networks. Being able to perform efficiently
on this common and critical operation is very important. The current JSON-RPC implementation
relies heavily on type introspection via reflection, which is known to be slow in
Go. We should therefore produce benchmarks of these methods to determine how much overhead
we are adding to what is likely to be a very common operation.
The other JSON-RPC methods are much less critical to the core functionality of Tendermint.
While there may be other points of performance consideration within the RPC, methods that do not
receive high volumes of requests should not be prioritized for performance work.

NOTE: Previous discussion of the RPC framework was done in [ADR 57][adr-57], and
there is ongoing work to inspect and alter the JSON-RPC framework in [RFC 002][rfc-002].
Many of these RPC-related performance considerations can either wait until the RFC 002 work is done or be
considered concordantly with the in-flight changes to the JSON-RPC.
### Protocol

#### Claim: Gossiping messages is a slow process

Currently, for any validator to successfully vote in a consensus _step_, it must
receive votes from greater than 2/3 of the validators on the network. In many cases,
it's preferable to receive as many votes as possible from correct validators.
This produces a quadratic increase in the number of messages communicated as more validators join the network
(each of the N validators must communicate with all other N-1 validators).

This large number of messages communicated per step has been identified as impacting the
performance of the protocol. Given that the number of messages communicated has been
identified as a bottleneck, it would be extremely valuable to gather data on how long
it takes for popular chains with many validators to gather all votes within a step.

Metrics that would improve visibility into this include:

* Amount of time for a node to gather votes in a step.
* Amount of time for a node to gather all block parts.
* Number of votes each node sends to gossip (i.e., not its own votes, but votes it is
transmitting for a peer).
* Total number of votes each node receives (a node may receive duplicate votes,
so understanding how frequently this occurs will be valuable in evaluating the performance
of the gossip system).
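
The quadratic growth described above is easy to make concrete. The sketch below counts point-to-point vote transmissions per step under the simplifying assumption that every validator sends its vote directly to every other validator; real gossip relays votes through peers, so this is an illustration of the scaling, not a model of the wire traffic.

```go
package main

import "fmt"

// voteMessages returns the number of point-to-point vote transmissions
// per step if each of n validators sends its vote to every other
// validator directly: n*(n-1), i.e. quadratic growth in n.
func voteMessages(n int) int {
	return n * (n - 1)
}

func main() {
	for _, n := range []int{10, 100, 200, 500} {
		fmt.Printf("%3d validators -> %7d vote messages per step\n", n, voteMessages(n))
	}
}
```

Going from 100 to 200 validators roughly quadruples the message count, which is why vote-gathering time per step is worth measuring directly.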
#### Claim: Hashing Txs causes slowdown in Tendermint

Using a faster hash algorithm for Tx hashes is currently a point of discussion
in Tendermint. Namely, it is being considered as part of the [modular hashing proposal][modular-hashing].
It is currently unknown whether hashing transactions in the mempool forms a significant bottleneck.
Although it does not appear to be documented as slow, there are a few open GitHub
issues that indicate a possible user preference for a faster hashing algorithm,
including [issue 2187][issue-2187] and [issue 2186][issue-2186].

It is likely worth investigating what order of magnitude Tx hashing takes in comparison to the other
work involved in adding a Tx to the mempool. It is not currently clear whether the rate of adding Txs
to the mempool is a source of user pain. We should not endeavor to make large changes to
consensus-critical components without first being certain that the change is highly
valuable and impactful.
### Digital Signatures

#### Claim: Verification of digital signatures may cause slowdown in Tendermint

Working with cryptographic signatures can be computationally expensive. The Cosmos
Hub uses [ed25519 signatures][hub-signature]. The library performing signature
verification on votes in Tendermint is [benchmarked][ed25519-bench] to be able to verify an `ed25519`
signature in 75μs on a reasonably fast CPU. A validator in the Cosmos Hub performs
three sets of verifications on the signatures of the 140 validators in the Hub
in a consensus round: during block verification, when verifying the prevotes, and
when verifying the precommits. With no batching, this would be roughly `31.5ms` per
round (3 × 140 × 75μs). It is quite unlikely, therefore, that this accounts for any serious amount
of the ~7 seconds of block time per height in the Hub.
This may cause slowdown when syncing, since the process needs to constantly verify
signatures. It's possible that improved signature aggregation will lead to improved
light client or other syncing performance. In general, a metric should be added
to track the block rate while block syncing.
#### Claim: Our use of digital signatures in the consensus protocol contributes to performance issues

Currently, Tendermint's digital signature verification requires that all validators
receive all vote messages. Each validator must receive the complete digital signature
along with the vote message that it corresponds to. This means that all N validators
must receive messages from at least 2/3 of the N validators in each consensus
round. Given the potential for oddly shaped network topologies and the expected
variable network round-trip times of a few hundred milliseconds in a blockchain,
it is highly likely that this amount of gossiping is leading to a significant amount
of the slowdown in the Cosmos Hub and in Tendermint consensus.
### Tendermint Event System

#### Claim: The event system is a bottleneck in Tendermint

The Tendermint event system is used to communicate and store information about
internal Tendermint execution. The system uses channels internally to send messages
to different subscribers. Sending an event [blocks on the internal channel][event-send].
The default configuration is to [use an unbuffered channel for event publishes][event-buffer-capacity].
Several consumers of the event system also use an unbuffered channel for reads.
An example of this is the [event indexer][event-indexer-unbuffered], which takes an
unbuffered subscription to the event system. The result is that these unbuffered readers
can cause writes to the event system to block or slow down, depending on contention in the
event system. This has implications for the consensus system, which [publishes events][consensus-event-send].

To better understand the performance of the event system, we should add metrics to track the timing of
event sends. The following metrics would be a good start for tracking this performance:

* Time in event send, labeled by event type
* Time in event receive, labeled by subscriber
* Event throughput, measured in events per unit time
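
The blocking behavior described above can be demonstrated in isolation: an unbuffered channel send does not return until a subscriber receives, so a slow subscriber directly stalls the publisher. This is a sketch of the effect, not Tendermint's pubsub implementation; the measured send duration is exactly the "time in event send" metric proposed above.

```go
package main

import (
	"fmt"
	"time"
)

// publish sends an event on ch and returns how long the send blocked.
// With an unbuffered channel, that is how long the slowest subscriber
// made the publisher wait.
func publish(ch chan<- string, event string) time.Duration {
	start := time.Now()
	ch <- event
	return time.Since(start)
}

func main() {
	unbuffered := make(chan string)

	// A slow subscriber: it only reads after doing 10ms of "indexing".
	go func() {
		time.Sleep(10 * time.Millisecond)
		<-unbuffered
	}()

	blocked := publish(unbuffered, "NewBlock")
	fmt.Printf("unbuffered send blocked for %v\n", blocked)

	// With a buffered channel, the same send returns immediately.
	buffered := make(chan string, 1)
	fmt.Printf("buffered send blocked for %v\n", publish(buffered, "NewBlock"))
}
```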
### References

[modular-hashing]: https://github.com/tendermint/tendermint/pull/6773
[issue-2186]: https://github.com/tendermint/tendermint/issues/2186
[issue-2187]: https://github.com/tendermint/tendermint/issues/2187
[rfc-002]: https://github.com/tendermint/tendermint/pull/6913
[adr-57]: https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-057-RPC.md
[issue-1319]: https://github.com/tendermint/tendermint/issues/1319
[abci-commit-description]: https://github.com/tendermint/spec/blob/master/spec/abci/apps.md#commit
[abci-local-client-code]: https://github.com/tendermint/tendermint/blob/511bd3eb7f037855a793a27ff4c53c12f085b570/abci/client/local_client.go#L84
[hub-signature]: https://github.com/cosmos/gaia/blob/0ecb6ed8a244d835807f1ced49217d54a9ca2070/docs/resources/genesis.md#consensus-parameters
[ed25519-bench]: https://github.com/oasisprotocol/curve25519-voi/blob/d2e7fc59fe38c18ca990c84c4186cba2cc45b1f9/PERFORMANCE.md
[event-send]: https://github.com/tendermint/tendermint/blob/5bd3b286a2b715737f6d6c33051b69061d38f8ef/libs/pubsub/pubsub.go#L338
[event-buffer-capacity]: https://github.com/tendermint/tendermint/blob/5bd3b286a2b715737f6d6c33051b69061d38f8ef/types/event_bus.go#L14
[event-indexer-unbuffered]: https://github.com/tendermint/tendermint/blob/5bd3b286a2b715737f6d6c33051b69061d38f8ef/state/indexer/indexer_service.go#L39
[consensus-event-send]: https://github.com/tendermint/tendermint/blob/5bd3b286a2b715737f6d6c33051b69061d38f8ef/internal/consensus/state.go#L1573
[sdk-query-fix]: https://github.com/cosmos/cosmos-sdk/pull/10045