# RFC 002: Interprocess Communication (IPC) in Tendermint

## Changelog

- 08-Sep-2021: Initial draft (@creachadair).

## Abstract

Communication in Tendermint among consensus nodes, applications, and operator
tools uses a variety of different message formats and transport mechanisms. In
some cases there are multiple options. Having all these options complicates
both the code and the developer experience, and hides bugs. To support a more
robust, trustworthy, and usable system, we should document which communication
paths are essential, which could be removed or reduced in scope, and what we
can improve for the most important use cases.

This document proposes a variety of possible improvements of varying size and
scope. Specific design proposals should get their own documentation.
## Background

The Tendermint state replication engine has a complex IPC footprint.

1. Consensus nodes communicate with each other using a networked peer-to-peer
   message-passing protocol.

2. Consensus nodes communicate with the application whose state is being
   replicated via the [Application BlockChain Interface (ABCI)][abci].

3. Consensus nodes export a network-accessible [RPC service][rpc-service] to
   support operations (bootstrapping, debugging) and synchronization of
   [light clients][light-client]. This interface is also used by the
   [`tendermint` CLI][tm-cli].

4. Consensus nodes export a gRPC service exposing a subset of the methods of
   the RPC service described by (3). This was intended to simplify the
   implementation of tools that already use gRPC to communicate with an
   application (via the Cosmos SDK) and want to talk to the consensus node
   without implementing yet another RPC protocol.

   The gRPC interface to the consensus node has been deprecated and is slated
   for removal in the forthcoming Tendermint v0.36 release.

5. Consensus nodes may optionally communicate with a "remote signer" that holds
   a validator key and can provide public keys and signatures to the consensus
   node. One of the stated goals of this configuration is to allow the signer
   to be run on a private network, separate from the consensus node, so that a
   compromise of the consensus node from the public network would be less
   likely to expose validator keys.
## Discussion: Transport Mechanisms

### Remote Signer Transport

A remote signer communicates with the consensus node in one of two ways:

1. "Raw": Using a TCP or Unix-domain socket that carries varint-prefixed
   protocol buffer messages (see the framing sketch after this list). In this
   mode, the consensus node is the server, and the remote signer is the client.

   This mode has been deprecated and is intended to be removed.

2. gRPC: This mode uses the same protobuf messages as "Raw" mode, but uses a
   standard encrypted gRPC HTTP/2 stub as the transport. In this mode, the
   remote signer is the server and the consensus node is the client.
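
For concreteness, here is a minimal sketch of the kind of varint-prefixed
framing used by the "raw" mode, leaving aside connection setup and any
encryption. The helper assumes a generic `proto.Message` and the standard Go
protobuf module; it is illustrative only, not the actual privval
implementation.

```go
package privvalsketch

import (
	"encoding/binary"
	"net"

	"google.golang.org/protobuf/proto"
)

// writeDelimited writes msg to conn as a single frame: a uvarint length
// prefix followed by the wire-format protobuf encoding of the message.
func writeDelimited(conn net.Conn, msg proto.Message) error {
	body, err := proto.Marshal(msg)
	if err != nil {
		return err
	}
	var lenBuf [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(lenBuf[:], uint64(len(body)))
	if _, err := conn.Write(lenBuf[:n]); err != nil {
		return err
	}
	_, err = conn.Write(body)
	return err
}
```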
### ABCI Transport

In ABCI, the _application_ is the server, and the Tendermint consensus engine
is the client. Most applications implement the server using the
[Cosmos SDK][cosmos-sdk], which handles low-level details of the ABCI
interaction and provides a higher-level interface to the rest of the
application. The SDK is written in Go.

Beneath the SDK, the application communicates with Tendermint core in one of
two ways (a sketch of the socket option follows this list):

- In-process direct calls (for applications written in Go and compiled against
  the Tendermint code). This is an optimization for the common case where an
  application is written in Go, to save on the overhead of marshaling and
  unmarshaling requests and responses within the same process:
  [`abci/client/local_client.go`][local-client]

- A custom remote procedure protocol built on wire-format protobuf messages
  using a socket (the "socket protocol"):
  [`abci/server/socket_server.go`][socket-server]
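
As a rough illustration of the second option, the sketch below serves a
do-nothing ABCI application over the socket protocol. The `abciserver.NewServer`
helper and import paths are assumptions based on the abci package layout at the
time of writing; a Go application compiled into the node would instead hand the
same `Application` value to the in-process local client and skip the socket
entirely.

```go
package main

import (
	"log"

	abciserver "github.com/tendermint/tendermint/abci/server"
	abcitypes "github.com/tendermint/tendermint/abci/types"
)

// app embeds BaseApplication, which supplies no-op defaults for every ABCI
// method; a real application would override CheckTx, DeliverTx, Commit, etc.
type app struct {
	abcitypes.BaseApplication
}

func main() {
	// Serve ABCI to an out-of-process consensus node over the varint-prefixed
	// "socket" protocol. (Helper name and signature are assumptions.)
	srv, err := abciserver.NewServer("tcp://127.0.0.1:26658", "socket", &app{})
	if err != nil {
		log.Fatal(err)
	}
	if err := srv.Start(); err != nil {
		log.Fatal(err)
	}
	select {} // serve until the process is killed
}
```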
The SDK also provides a [gRPC service][sdk-grpc] accessible from outside the
application, allowing clients to broadcast transactions to the network, look up
transactions, and simulate transaction costs.
### RPC Transport

The consensus node RPC service allows callers to query consensus parameters
(genesis data, transactions, commits), node status (network info, health
checks), application state (`abci_query`, `abci_info`), mempool state, and
other attributes of the node and its application. The service also provides
methods allowing transactions and evidence to be injected ("broadcast") into
the blockchain.

The RPC service is exposed in several ways (see the sketch after this list for
an example of the first two):

- HTTP GET: Queries may be sent as URI parameters, with method names in the
  path.

- HTTP POST: Queries may be sent as JSON-RPC request messages in the body of an
  HTTP POST request. The server uses a custom implementation of JSON-RPC that
  is not fully compatible with the [JSON-RPC 2.0 spec][json-rpc], but handles
  the common cases.

- Websocket: Queries may be sent as JSON-RPC request messages via a websocket.
  This transport uses more or less the same JSON-RPC plumbing as the HTTP POST
  handler.

  The websocket endpoint also includes three methods that are _only_ exported
  via websocket, which appear to support event subscription.

- gRPC: A subset of queries may be issued in protocol buffer format to the gRPC
  interface described above under (4). As noted, this endpoint is deprecated
  and will be removed in v0.36.
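
To make the first two transports concrete, the sketch below issues the same
`abci_info` query both as an HTTP GET with the method name in the path and as a
JSON-RPC request in an HTTP POST body. The node address is the conventional
default; the snippet is illustrative, not a supported client.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// HTTP GET form: method name in the path, parameters (none here) as URI
	// parameters.
	resp, err := http.Get("http://127.0.0.1:26657/abci_info")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Println(string(body))

	// HTTP POST form: the same method, wrapped in a JSON-RPC 2.0 envelope.
	req := []byte(`{"jsonrpc":"2.0","id":1,"method":"abci_info","params":{}}`)
	resp, err = http.Post("http://127.0.0.1:26657", "application/json", bytes.NewReader(req))
	if err != nil {
		panic(err)
	}
	body, _ = io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Println(string(body))
}
```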
### Opportunities for Simplification

**Claim:** There are too many IPC mechanisms.

The preponderance of ABCI usage is via the Cosmos SDK, which means the
application and the consensus node are compiled together into a single binary,
and the consensus node calls the ABCI methods of the application directly as Go
functions.

We also need a true IPC transport to support ABCI applications _not_ written in
Go. There are several known applications written in Rust, for example,
including [Anoma](https://github.com/anoma/anoma), Penumbra,
[Oasis](https://github.com/oasisprotocol/oasis-core), Twilight, and
[Nomic](https://github.com/nomic-io/nomic). Ideally we will have at most one
such transport "built in"; more esoteric cases can be handled by a custom
proxy. Pragmatically, gRPC is probably the right choice here.
The primary consumers of the multi-headed "RPC service" today are the light
client and the `tendermint` command-line client. There is probably some local
use via curl, but I expect that is mostly ad hoc. Ethan reports that nodes are
often configured with the ports to the RPC service blocked, which is good for
security but complicates use by the light client.
### Context: Remote Signer Issues

Since the remote signer needs a secure communication channel to exchange keys
and signatures, and is expected to run truly remotely from the node (i.e., on a
separate physical server), there is not a whole lot we can do here. We should
finish the deprecation and removal of the "raw" socket protocol between the
consensus node and remote signers, but the use of gRPC is appropriate.

The main improvement we can make is to simplify the implementation quite a bit,
once we no longer need to support both "raw" and gRPC transports.
### Context: ABCI Issues

In the original design of ABCI, the presumption was that all access to the
application should be mediated by the consensus node. The idea is that outside
access could change application state and corrupt the consensus process, which
relies on the application being deterministic. Of course, even without outside
access an application could behave nondeterministically, but allowing other
programs to send it requests was seen as courting trouble.

Conversely, users noted that most of the time, tools written for a particular
application don't want to talk to the consensus module directly. The
application "owns" the state machine the consensus engine is replicating, so
tools that care about application state should talk to the application.
Otherwise, they would have to bake in knowledge about Tendermint (e.g., its
interfaces and data structures) just because of the mediation.

For clients to talk directly to the application, however, there is another
concern: The consensus node is the ABCI _client_, so it is inconvenient for the
application to "push" work into the consensus module via ABCI itself. The
current implementation works around this by calling the consensus node's RPC
service, which exposes an `ABCIQuery` kitchen-sink method that gives the
application a way to poke ABCI messages in the other direction.

Without this RPC method, you could work around this (at least in principle) by
having the consensus module "poll" the application for work that needs to be
done, but that has unsatisfactory implications for performance and robustness,
as well as being harder to understand.

There has apparently been discussion about making communication between the
consensus node and the application more bidirectional, but this issue seems to
still be unresolved.
Another complication of ABCI is that it requires the application (server) to
maintain [four separate connections][abci-conn]: one for "consensus" operations
(BeginBlock, EndBlock, DeliverTx, Commit), one for "mempool" operations, one
for "query" operations, and one for "snapshot" (state synchronization)
operations. The rationale seems to have been that these groups of operations
should be able to proceed concurrently with each other. In practice, it results
in a very complex state management problem to coordinate state updates between
the separate streams. While application authors in Go are mostly insulated from
that complexity by the Cosmos SDK, the plumbing to maintain those separate
streams is complicated and hard to understand, and we suspect it contains
subtle concurrency bugs and/or lock contention issues that affect performance
and are difficult to pin down.

Even without changing the semantics of any ABCI operations, this code could be
made smaller and easier to debug by separating the management of concurrency
and locking from the IPC transport: If all requests and responses are routed
through one connection, the server can explicitly maintain priority queues for
requests and responses, and make less-conservative decisions about when locks
are (or aren't) required to synchronize state access. With independent queues,
the server must lock conservatively, and no optimistic scheduling is practical.
(A rough sketch of this single-connection scheduling idea follows below.)

This would be a tedious implementation change, but should be achievable without
breaking any of the existing interfaces. More importantly, it could potentially
address a lot of difficult concurrency and performance problems we currently
see anecdotally but have difficulty isolating because of how intertwined these
separate message streams are at runtime.
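
The following is a purely illustrative sketch, not existing Tendermint code: if
all requests arrive on one connection, a demuxer can tag each request with its
logical stream and feed explicit queues, and the scheduler can give consensus
work priority instead of locking conservatively across four independent
connections.

```go
package abcisched

// stream identifies the logical ABCI stream a request belongs to.
type stream int

const (
	consensusStream stream = iota
	mempoolStream
	queryStream
	snapshotStream
)

// request is a unit of work demultiplexed from the single connection.
type request struct {
	src stream
	run func() // handler that executes the request and sends its response
}

// dispatch drains two explicit queues fed by the connection demuxer,
// always preferring consensus work when both queues have requests pending.
func dispatch(consensus, other <-chan request) {
	for {
		// Fast path: run any pending consensus request first.
		select {
		case req := <-consensus:
			req.run()
			continue
		default:
		}
		// Otherwise take whichever request arrives next.
		select {
		case req := <-consensus:
			req.run()
		case req := <-other:
			req.run()
		}
	}
}
```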
TODO: Impact of ABCI++ for this topic?
### Context: RPC Issues

The RPC system serves several masters, and has a complex surface area. I
believe there are some improvements that can be unlocked by separating some of
these concerns.

The Tendermint light client currently uses the RPC service to look up blocks
and transactions, and to forward ABCI queries to the application. The light
client proxy uses the RPC service via a websocket. The Cosmos IBC relayer also
uses the RPC service via websocket to watch for transaction events, and uses
the `ABCIQuery` method to fetch information and proofs for posted transactions.

Some work is already underway toward using P2P message passing rather than RPC
to synchronize light client state with the rest of the network. IBC relaying,
however, requires access to the event system, which is currently not accessible
except via the RPC interface. Event subscription _could_ be exposed via P2P,
but that is a larger project since it adds P2P communication load, and might
thus have an impact on the performance of consensus.

If event subscription can be moved into the P2P network, we could entirely
remove the websocket transport, even for clients that still need access to the
RPC service. Until then, we may still be able to reduce the scope of the
websocket endpoint to _only_ event subscription, by moving uses of the RPC
server as a proxy to ABCI over to the gRPC interface.
Having the RPC server still makes sense for local bootstrapping and operations,
but it can be further simplified. Here are some specific proposals:

- Remove the HTTP GET interface entirely.

- Simplify JSON-RPC plumbing to remove unnecessary reflection and wrapping.

- Remove the gRPC interface (this is already planned for v0.36).

- Separate the websocket interface from the rest of the RPC service, and
  restrict it to only event subscription.

  Eventually we should try to remove the websocket interface entirely, but we
  will need to revisit that (probably in a new RFC) once we've done some of the
  easier things.

These changes would preserve the ability of operators to issue queries with
curl (but would require using JSON-RPC instead of URI parameters). That would
be a little less user-friendly, but for a use case that should not be that
prevalent.

These changes would also preserve compatibility with existing JSON-RPC based
code paths like the `tendermint` CLI and the light client (even ahead of
further work to remove that dependency).
**Design goal:** An operator should be able to disable non-local access to the
RPC server on any node in the network without impairing the ability of the
network to provide state replication service, including to light clients.

**Design principle:** All communication required to implement and monitor the
consensus network should use P2P, including the various synchronizations.
### Options for ABCI Transport

The majority of current usage is in Go, and the majority of that is mediated by
the Cosmos SDK, which uses the "direct call" interface. There is probably some
opportunity to clean up the implementation of that code, notably by inverting
which interface is at the "top" of the abstraction stack (currently it acts
like an RPC interface, and escape-hatches into the direct call). However, this
general approach works fine and doesn't need to be fundamentally changed.

For applications _not_ written in Go, the two remaining options are the
"socket" protocol (another variation on varint-prefixed protobuf messages over
an unstructured stream) and gRPC. It would be nice if we could get rid of one
of these to reduce (unneeded?) optionality.

Since both the socket protocol and gRPC depend on protocol buffers, the
"socket" protocol is the most obvious choice to remove. While gRPC is more
complex, the set of languages that _have_ protobuf support but _lack_ gRPC
support is small. Moreover, gRPC is already widely used in the rest of the
ecosystem (including the Cosmos SDK).

If some use case did arise later that can't work with gRPC, it would not be too
difficult for that application author to write a little proxy (in Go) that
bridges the convenient SDK APIs into a simpler protocol than gRPC.

**Design principle:** It is better for an uncommon special case to carry the
burdens of its specialness than to bake an escape hatch into the infrastructure.

**Recommendation:** We should deprecate and remove the socket protocol.
### Options for RPC Transport

[ADR 057][adr-57] proposes using gRPC for the Tendermint RPC implementation.
This is still possible, but if we are able to simplify and decouple the
concerns as described above, I do not think it should be necessary.

While JSON-RPC is not the best possible RPC protocol for all situations, it has
some advantages over gRPC for our domain. Specifically:

- It is easy to call JSON-RPC manually from the command line, which addresses a
  common concern for the RPC service: local debugging and operations.

  Relatedly: JSON is relatively easy for humans to read and write, and it can
  be easily copied and pasted to share sample queries and debugging results in
  chat, issue comments, and so on. Ideally, the RPC service will not be used
  for activities where the costs of a text protocol are important compared to
  its legibility and manual usability benefits.

- gRPC has an enormous dependency footprint for both clients and servers, and
  many of the features it provides to support security and performance
  (encryption, compression, streaming, etc.) are mostly irrelevant to local
  use. Tendermint already needs to include a gRPC client for the remote signer,
  but if we can avoid the need for a _client_ to depend on gRPC, that is a win
  for usability.

- If we intend to migrate light clients off RPC to use P2P entirely, there is
  no advantage to forcing a temporary migration to gRPC along the way; and once
  the light client is not dependent on the RPC service, the efficiency of the
  protocol is much less important.

- We can still get the benefits of generated data types using protocol buffers,
  even without using gRPC:

  - Protobuf defines a standard JSON encoding for all message types, so
    languages with protobuf support do not need to worry about type mapping
    oddities.

  - Using JSON means that even languages _without_ good protobuf support can
    implement the protocol with a bit more work, and I expect this situation
    to be rare.
Even if a language lacks a good standard JSON-RPC mechanism, the protocol is
lightweight and can be implemented by simple send/receive over TCP or
Unix-domain sockets with no need for code generation, encryption, etc. (see the
sketch below). gRPC uses a complex HTTP/2 based transport that is not easily
replicated.
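
As a rough measure of that weight, the sketch below performs a complete
JSON-RPC exchange using only the Go standard library over a Unix-domain socket.
The socket path is a placeholder and the framing (one JSON value in each
direction) is simplified for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net"
)

type rpcRequest struct {
	JSONRPC string          `json:"jsonrpc"`
	ID      int             `json:"id"`
	Method  string          `json:"method"`
	Params  json.RawMessage `json:"params,omitempty"`
}

type rpcResponse struct {
	JSONRPC string          `json:"jsonrpc"`
	ID      int             `json:"id"`
	Result  json.RawMessage `json:"result,omitempty"`
	Error   json.RawMessage `json:"error,omitempty"`
}

func main() {
	conn, err := net.Dial("unix", "/tmp/node.sock") // placeholder path
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// One request, one response: no code generation, no TLS, no HTTP/2.
	if err := json.NewEncoder(conn).Encode(rpcRequest{
		JSONRPC: "2.0", ID: 1, Method: "status",
	}); err != nil {
		panic(err)
	}
	var resp rpcResponse
	if err := json.NewDecoder(conn).Decode(&resp); err != nil {
		panic(err)
	}
	fmt.Println(string(resp.Result))
}
```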
### Future Work

The background and proposals sketched above focus on the existing structure of
Tendermint and improvements we can make in the short term. It is worthwhile to
also consider options for longer-term, broader changes to the IPC ecosystem.
The following outlines some ideas at a high level:

- **Consensus service:** Today, the application and the consensus node are
  nominally connected only via ABCI. Tendermint was originally designed with
  the assumption that all communication with the application should be mediated
  by the consensus node. Based on further experience, however, the design goal
  is now that the _application_ should be the mediator of application state.

  As noted above, however, ABCI is a client/server protocol, with the
  application as the server. For outside clients that turns out to have been a
  good choice, but it complicates the relationship between the application and
  the consensus node: previously transactions were entered via the node; now
  they are entered via the app.

  We have worked around this by using the Tendermint RPC service to give the
  application a "back channel" to the consensus node, so that it can push
  transactions back into the consensus network. But the RPC service exposes a
  lot of other functionality, too, including event subscription, block and
  transaction queries, and a lot of node status information.

  Even if we can't easily "fix" the orientation of the ABCI relationship, we
  could improve isolation by splitting out the parts of the RPC service that
  the application needs as a back channel, and sharing those _only_ with the
  application. By defining a "consensus service", we could give the application
  a way to talk back that is limited to only the capabilities it needs. This
  approach has the benefit that we could do it without breaking existing use,
  and if we later did "fix" the ABCI directionality, we could drop the special
  case without disrupting the rest of the RPC interface.
- **Event service:** Right now, the IBC relayer relies on the Tendermint RPC
  service to provide a stream of block and transaction events, which it uses to
  discover which transactions need relaying to other chains (a sketch of this
  kind of subscription follows below). While I think that event subscription
  should eventually be handled via P2P, we could gain some immediate benefit by
  splitting out event subscription from the rest of the RPC service.

  In this model, an event subscription service would be exposed on the public
  network, but on a different endpoint. This would remove the need for the RPC
  service to support the websocket protocol, and would allow operators to
  isolate potentially sensitive status query results from the public network.

  At the moment the relayers also use the RPC service to get block data for
  synchronization, but work is already in progress to handle that concern via
  the P2P layer. Once that's done, event subscription could be separated.
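
For reference, the sketch below shows the shape of the subscription the relayer
depends on today: a JSON-RPC `subscribe` call over the node's websocket
endpoint, after which matching events arrive as further JSON-RPC messages on
the same connection. The endpoint path and query syntax follow the documented
RPC conventions; the client library choice (gorilla/websocket) and the error
handling are illustrative assumptions.

```go
package main

import (
	"fmt"

	"github.com/gorilla/websocket"
)

func main() {
	conn, _, err := websocket.DefaultDialer.Dial("ws://127.0.0.1:26657/websocket", nil)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Subscribe to transaction events via JSON-RPC over the websocket.
	sub := map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "subscribe",
		"params":  map[string]string{"query": "tm.event='Tx'"},
	}
	if err := conn.WriteJSON(sub); err != nil {
		panic(err)
	}

	// Matching events stream back as additional JSON-RPC messages.
	for {
		var msg map[string]interface{}
		if err := conn.ReadJSON(&msg); err != nil {
			panic(err)
		}
		fmt.Println(msg)
	}
}
```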
Separating parts of the existing RPC service is not without cost: it might
require additional connection endpoints, for example, though it is also not too
difficult for multiple otherwise-independent services to share a connection.
In return, though, it would become easier to reduce transport options and for
operators to independently control access to sensitive data. Considering the
viability and implications of these ideas is beyond the scope of this RFC, but
they are documented here since they follow from the background we have already
discussed.
## References

[abci]: https://github.com/tendermint/spec/tree/95cf253b6df623066ff7cd4074a94e7a3f147c7a/spec/abci
[rpc-service]: https://docs.tendermint.com/master/rpc/
[light-client]: https://docs.tendermint.com/master/tendermint-core/light-client.html
[tm-cli]: https://github.com/tendermint/tendermint/tree/master/cmd/tendermint
[cosmos-sdk]: https://github.com/cosmos/cosmos-sdk/
[local-client]: https://github.com/tendermint/tendermint/blob/master/abci/client/local_client.go
[socket-server]: https://github.com/tendermint/tendermint/blob/master/abci/server/socket_server.go
[sdk-grpc]: https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/tx#ServiceServer
[json-rpc]: https://www.jsonrpc.org/specification
[abci-conn]: https://github.com/tendermint/spec/blob/master/spec/abci/apps.md#state
[adr-57]: https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-057-RPC.md