# ADR 062: P2P Architecture and Abstractions

## Changelog

- 2020-11-09: Initial version (@erikgrinaker)
- 2020-11-13: Remove stream IDs, move peer errors onto channel, note on moving PEX into core (@erikgrinaker)
- 2020-11-16: Notes on recommended reactor implementation patterns, approve ADR (@erikgrinaker)

## Context

In [ADR 061](adr-061-p2p-refactor-scope.md) we decided to refactor the peer-to-peer (P2P) networking stack. The first phase is to redesign and refactor the internal P2P architecture, while retaining protocol compatibility as far as possible.

## Alternative Approaches

Several variations of the proposed design were considered, including e.g. calling interface methods instead of passing messages (like the current architecture), merging channels with streams, exposing the internal peer data structure to reactors, being message format-agnostic via arbitrary codecs, and so on. This design was chosen because it has very loose coupling, is simpler to reason about and more convenient to use, avoids race conditions and lock contention for internal data structures, gives reactors better control of message ordering and processing semantics, and allows for QoS scheduling and backpressure in a very natural way.

[multiaddr](https://github.com/multiformats/multiaddr) was considered as a transport-agnostic peer address format over regular URLs, but it does not appear to have very widespread adoption, and advanced features like protocol encapsulation and tunneling do not appear to be immediately useful to us.

There were also proposals to use LibP2P instead of maintaining our own P2P stack, which were rejected (for now) in [ADR 061](adr-061-p2p-refactor-scope.md).

## Decision
The P2P stack will be redesigned as a message-oriented architecture, primarily relying on Go channels for communication and scheduling. It will use IO stream transports to exchange raw bytes with individual peers, bidirectional peer-addressable channels to send and receive Protobuf messages, and a router to route messages between reactors and peers. Message passing is asynchronous with at-most-once delivery.

## Detailed Design

This ADR is primarily concerned with the architecture and interfaces of the P2P stack, not implementation details. Separate ADRs may be submitted for individual components, since implementation may be non-trivial. The interfaces described here should therefore be considered a rough architecture outline, not a complete and final design.

Primary design objectives have been:

* Loose coupling between components, for a simpler, more robust, and test-friendly architecture.
* Pluggable transports (not necessarily networked).
* Better scheduling of messages, with improved prioritization, backpressure, and performance.
* Centralized peer lifecycle and connection management.
* Better peer address detection, advertisement, and exchange.
* Wire-level backwards compatibility with current P2P network protocols, except where it proves too obstructive.

The main abstractions in the new stack are:
* `peer`: A node in the network, uniquely identified by a `PeerID` and stored in a `peerStore`.
* `Transport`: An arbitrary mechanism to exchange bytes with a peer using IO `Stream`s across a `Connection`.
* `Channel`: A bidirectional channel to asynchronously exchange Protobuf messages with peers addressed with `PeerID`.
* `Router`: Maintains transport connections to relevant peers and routes channel messages.
* Reactor: A design pattern loosely defined as "something which listens on a channel and reacts to messages".

These abstractions are illustrated in the following diagram (representing the internals of node A) and described in detail below.

![P2P Architecture Diagram](img/adr-062-architecture.svg)
### Transports

Transports are arbitrary mechanisms for exchanging raw bytes with a peer. For example, a gRPC transport would connect to a peer over TCP/IP and send data using the gRPC protocol, while an in-memory transport might communicate with a peer running in another goroutine using internal byte buffers. Note that transports don't have a notion of a `peer` as such - instead, they communicate with an arbitrary endpoint address (e.g. IP address and port number), to decouple them from the rest of the P2P stack.

Transports must satisfy the following requirements:

* Be connection-oriented, and support both listening for inbound connections and making outbound connections using endpoint addresses.
* Support multiple logical IO streams within a single connection, to take full advantage of protocols with native stream support. For example, QUIC supports multiple independent streams, while HTTP/2 and MConn multiplex logical streams onto a single TCP connection.
* Provide the public key of the peer, and possibly encrypt or sign the traffic as appropriate. This should be compared with known data (e.g. the peer ID) to authenticate the peer and avoid man-in-the-middle attacks.
The initial transport implementation will be a port of the MConn protocol currently used by Tendermint, and should be backwards-compatible at the wire level as far as possible. This will be followed by an in-memory transport for testing, and a QUIC transport that may eventually replace MConn.

The `Transport` interface is:
```go
// Transport is an arbitrary mechanism for exchanging bytes with a peer.
type Transport interface {
	// Accept waits for the next inbound connection on a listening endpoint.
	Accept(context.Context) (Connection, error)

	// Dial creates an outbound connection to an endpoint.
	Dial(context.Context, Endpoint) (Connection, error)

	// Endpoints lists endpoints the transport is listening on. Any endpoint IP
	// addresses do not need to be normalized in any way (e.g. 0.0.0.0 is
	// valid), as they should be preprocessed before being advertised.
	Endpoints() []Endpoint
}
```
How the transport configures listening is transport-dependent, and not covered by the interface. This typically happens during transport construction, where a single instance of the transport is created and set to listen on an appropriate network interface before being passed to the router.

#### Endpoints

`Endpoint` represents a transport endpoint (e.g. an IP address and port). A connection always has two endpoints: one at the local node and one at the remote peer. Outbound connections to remote endpoints are made via `Dial()`, and inbound connections to listening endpoints are returned via `Accept()`.

The `Endpoint` struct is:
```go
// Endpoint represents a transport connection endpoint, either local or remote.
type Endpoint struct {
	// Protocol specifies the transport protocol, used by the router to pick a
	// transport for an endpoint.
	Protocol Protocol

	// Path is an optional, arbitrary transport-specific path or identifier.
	Path string

	// IP is an IP address (v4 or v6) to connect to. If set, this defines the
	// endpoint as a networked endpoint.
	IP net.IP

	// Port is a network port (either TCP or UDP). If not set, a default port
	// may be used depending on the protocol.
	Port uint16
}

// Protocol identifies a transport protocol.
type Protocol string
```
Endpoints are arbitrary transport-specific addresses, but if they are networked they must use IP addresses and thus rely on IP as a fundamental packet routing protocol. This enables policies for address discovery, advertisement, and exchange - for example, a private `192.168.0.0/24` IP address should only be advertised to peers on that IP network, while the public address `8.8.8.8` may be advertised to all peers. Similarly, any port numbers, if given, must represent TCP and/or UDP port numbers, in order to use [UPnP](https://en.wikipedia.org/wiki/Universal_Plug_and_Play) to autoconfigure e.g. NAT gateways.

Non-networked endpoints (without an IP address) are considered local, and will only be advertised to other peers connecting via the same protocol. For example, an in-memory transport used for testing might have `Endpoint{Protocol: "memory", Path: "foo"}` as an address for the node "foo", and this should only be advertised to other nodes using `Protocol: "memory"`.
#### Connections and Streams

A connection represents an established transport connection between two endpoints (and thus two nodes), which can be used to exchange bytes via logically distinct IO streams. Connections are set up either via `Transport.Dial()` (outbound) or `Transport.Accept()` (inbound). The caller is responsible for verifying the remote peer's public key as returned by the connection, following the current MConn protocol behavior for now.

Data is exchanged over IO streams created with `Connection.Stream()`. These implement the standard Go `io.Reader` and `io.Writer` interfaces to read and write bytes. Transports are free to choose how to implement such streams, e.g. by taking advantage of native stream support in the underlying protocol or through multiplexing.

`Connection` and the related `Stream` interfaces are:
```go
// Connection represents an established connection between two endpoints.
type Connection interface {
	// Stream creates a new logically distinct IO stream within the connection.
	Stream() (Stream, error)

	// LocalEndpoint returns the local endpoint for the connection.
	LocalEndpoint() Endpoint

	// RemoteEndpoint returns the remote endpoint for the connection.
	RemoteEndpoint() Endpoint

	// PubKey returns the public key of the remote peer.
	PubKey() crypto.PubKey

	// Close closes the connection.
	Close() error
}

// Stream represents a single logical IO stream within a connection.
type Stream interface {
	io.Reader // Read([]byte) (int, error)
	io.Writer // Write([]byte) (int, error)
	io.Closer // Close() error
}
```
### Peers

Peers are other Tendermint network nodes. Each peer is identified by a unique `PeerID`, and has a set of `PeerAddress` addresses, expressed as URLs, at which it can be reached. Examples of peer addresses might be:

* `mconn://b10c@host.domain.com:25567/path`
* `unix:///var/run/tendermint/peer.sock`
* `memory:testpeer`

Addresses are resolved into one or more transport endpoints, e.g. by resolving DNS hostnames into IP addresses (which should be refreshed periodically). Peers should always be expressed as address URLs, and never as endpoints, which are a lower-level construct.
```go
// PeerID is a unique peer ID, generally expressed in hex form.
type PeerID []byte

// PeerAddress is a peer address URL. The User field, if set, gives the
// hex-encoded remote PeerID, which should be verified with the remote peer's
// public key as returned by the connection.
type PeerAddress url.URL

// Resolve resolves a PeerAddress into a set of Endpoints, typically by
// expanding out a DNS name in Host to its IP addresses. Field mapping:
//
//	Scheme → Endpoint.Protocol
//	Host   → Endpoint.IP
//	Port   → Endpoint.Port
//	Path+Query+Fragment,Opaque → Endpoint.Path
func (a PeerAddress) Resolve(ctx context.Context) []Endpoint { return nil }
```
The P2P stack needs to track a lot of internal information about peers, such as endpoints, status, priorities, and so on. This is done in an internal `peer` struct, which should not be exposed outside of the `p2p` package (e.g. to reactors) in order to avoid race conditions and lock contention - other packages should use `PeerID`.

The `peer` struct might look like the following, but is intentionally underspecified and will depend on implementation requirements (for example, it will almost certainly have to track statistics about connection failures and retries):
```go
// peer tracks internal status information about a peer.
type peer struct {
	ID        PeerID
	Status    PeerStatus
	Priority  PeerPriority
	Endpoints map[PeerAddress][]Endpoint // Resolved endpoints by address.
}

// PeerStatus specifies peer statuses.
type PeerStatus string

const (
	PeerStatusNew     PeerStatus = "new"     // New peer which we haven't tried to contact yet.
	PeerStatusUp      PeerStatus = "up"      // Peer which we have an active connection to.
	PeerStatusDown    PeerStatus = "down"    // Peer which we're temporarily disconnected from.
	PeerStatusRemoved PeerStatus = "removed" // Peer which has been removed.
	PeerStatusBanned  PeerStatus = "banned"  // Peer which is banned for misbehavior.
)

// PeerPriority specifies peer priorities.
type PeerPriority int

const (
	PeerPriorityNormal PeerPriority = iota + 1
	PeerPriorityValidator
	PeerPriorityPersistent
)
```
Peer information is stored in a `peerStore`, which may be persisted in an underlying database, and will replace the current address book either partially or in full. It is kept internal to the `p2p` package to avoid race conditions and tight coupling. At a minimum it should provide basic CRUD functionality as outlined below, but it will likely need additional functionality and is intentionally underspecified:
```go
// peerStore contains information about peers, possibly persisted to disk.
type peerStore struct {
	peers map[string]*peer // Entire set in memory, with PeerID.String() keys.
	db    dbm.DB           // Database for persistence, if non-nil.
}

func (p *peerStore) Delete(id PeerID) error     { return nil }
func (p *peerStore) Get(id PeerID) (peer, bool) { return peer{}, false }
func (p *peerStore) List() []peer               { return nil }
func (p *peerStore) Set(peer peer) error        { return nil }
```
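For illustration, a minimal in-memory version of this CRUD surface might look as follows. All names here are sketch-local; persistence via `dbm.DB` and the `List` method are omitted:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// PeerID and peer are redeclared locally so the sketch is self-contained.
type PeerID []byte

func (id PeerID) String() string { return hex.EncodeToString(id) }

type peer struct {
	ID     PeerID
	Status string
}

// memPeerStore keeps the entire peer set in memory, keyed by the hex form
// of the peer ID as in the peers map above.
type memPeerStore struct {
	peers map[string]*peer
}

func newMemPeerStore() *memPeerStore {
	return &memPeerStore{peers: map[string]*peer{}}
}

func (p *memPeerStore) Set(pr peer) error {
	p.peers[pr.ID.String()] = &pr
	return nil
}

func (p *memPeerStore) Get(id PeerID) (peer, bool) {
	pr, ok := p.peers[id.String()]
	if !ok {
		return peer{}, false
	}
	return *pr, true
}

func (p *memPeerStore) Delete(id PeerID) error {
	delete(p.peers, id.String())
	return nil
}

func main() {
	s := newMemPeerStore()
	s.Set(peer{ID: PeerID{0xb1, 0x0c}, Status: "up"})
	pr, ok := s.Get(PeerID{0xb1, 0x0c})
	fmt.Println(ok, pr.Status) // true up
}
```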
Peer address detection, advertisement and exchange (including detection of externally-reachable addresses via e.g. NAT gateways) is out of scope for this ADR, but may be covered in a separate ADR. The current PEX reactor should probably be absorbed into the core P2P stack and protocol instead of running as a separate reactor, since this needs to mutate the core peer data structures and will thus be tightly coupled with the router.
### Channels

While low-level data exchange happens via transport IO streams, the high-level API is based on a bidirectional `Channel` that can send and receive Protobuf messages addressed by `PeerID`. A channel is identified by an arbitrary `ChannelID` identifier, and can exchange Protobuf messages of one specific type (since the type to unmarshal into must be known). Message delivery is asynchronous and at-most-once.

The channel can also be used to report peer errors, e.g. when receiving an invalid or malicious message. This may cause the peer to be disconnected or banned depending on the router's policy.

A `Channel` has this interface:
```go
// Channel is a bidirectional channel for Protobuf message exchange with peers.
type Channel struct {
	// ID contains the channel ID.
	ID ChannelID

	// messageType specifies the type of messages exchanged via the channel, and
	// is used e.g. for automatic unmarshaling.
	messageType proto.Message

	// In is a channel for receiving inbound messages. Envelope.From is always
	// set.
	In <-chan Envelope

	// Out is a channel for sending outbound messages. Envelope.To or Broadcast
	// must be set, otherwise the message is discarded.
	Out chan<- Envelope

	// Error is a channel for reporting peer errors to the router, typically used
	// when peers send an invalid or malicious message.
	Error chan<- PeerError
}

// Close closes the channel, and is equivalent to close(Channel.Out). This will
// cause Channel.In to be closed when appropriate. The ID can then be reused.
func (c *Channel) Close() error { return nil }

// ChannelID is an arbitrary channel ID.
type ChannelID uint16

// Envelope specifies the message receiver and sender.
type Envelope struct {
	From      PeerID        // Message sender, or empty for outbound messages.
	To        PeerID        // Message receiver, or empty for inbound messages.
	Broadcast bool          // Send message to all connected peers, ignoring To.
	Message   proto.Message // Payload.
}

// PeerError is a peer error reported by a reactor via the Error channel. The
// severity may cause the peer to be disconnected or banned depending on policy.
type PeerError struct {
	PeerID   PeerID
	Err      error
	Severity PeerErrorSeverity
}

// PeerErrorSeverity determines the severity of a peer error.
type PeerErrorSeverity string

const (
	PeerErrorSeverityLow      PeerErrorSeverity = "low"      // Mostly ignored.
	PeerErrorSeverityHigh     PeerErrorSeverity = "high"     // May disconnect.
	PeerErrorSeverityCritical PeerErrorSeverity = "critical" // Ban.
)
```
A channel can reach any connected peer, and is implemented using transport streams against each individual peer, with an initial handshake to exchange the channel ID and any other metadata. The channel will automatically (un)marshal Protobuf to byte slices and use length-prefixed framing (the de facto standard for Protobuf streams) when writing them to the stream.
Message scheduling and queueing is left as an implementation detail, and can use any number of algorithms such as FIFO, round-robin, priority queues, etc. Since message delivery is not guaranteed, both inbound and outbound messages may be dropped, buffered, or blocked as appropriate.
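As an illustration of at-most-once semantics, an outbound queue can be as simple as a bounded Go channel that drops messages when full rather than blocking the sender (a sketch, not the proposed scheduler):

```go
package main

import "fmt"

// trySend enqueues a message on a bounded queue, dropping it when the queue
// is full - at-most-once semantics with no blocking of the sender.
func trySend(queue chan string, msg string) bool {
	select {
	case queue <- msg:
		return true
	default:
		return false // Queue full: drop rather than block.
	}
}

func main() {
	queue := make(chan string, 2)
	fmt.Println(trySend(queue, "a")) // true
	fmt.Println(trySend(queue, "b")) // true
	fmt.Println(trySend(queue, "c")) // false (dropped)
}
```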
Since a channel can only exchange messages of a single type, it is often useful to use a wrapper message type with e.g. a Protobuf `oneof` field that specifies a set of inner message types that it can contain. The channel can automatically perform this (un)wrapping if the outer message type implements the `Wrapper` interface (see [Reactor Example](#reactor-example) for an example):
```go
// Wrapper is a Protobuf message that can contain a variety of inner messages.
// If a Channel's message type implements Wrapper, the channel will
// automatically (un)wrap passed messages using the container type, such that
// the channel can transparently support multiple message types.
type Wrapper interface {
	// Wrap will take a message and wrap it in this one.
	Wrap(proto.Message) error

	// Unwrap will unwrap the inner message contained in this message.
	Unwrap() (proto.Message, error)
}
```
### Routers

The router manages all P2P networking for a node, and is responsible for keeping track of network peers, maintaining transport connections, and routing channel messages. As such, it must do e.g. connection retries and backoff, message QoS scheduling and backpressure, peer quality assessments, and endpoint detection and advertisement. In addition, the router provides a mechanism to subscribe to peer updates (e.g. peers connecting or disconnecting), and handles reported peer errors from reactors.

The implementation of the router is likely to be non-trivial, and is intentionally unspecified here. A separate ADR will likely be submitted for this. It is unclear whether message routing/scheduling and peer lifecycle management can be split into two separate components, or if these need to be tightly coupled.

The `Router` API is as follows:
```go
// Router manages connections to peers and routes Protobuf messages between them
// and local reactors. It also provides peer status updates and error reporting.
type Router struct{}

// NewRouter creates a new router, using the given peer store to track peers.
// Transports must be pre-initialized to listen on appropriate endpoints.
func NewRouter(peerStore *peerStore, transports map[Protocol]Transport) *Router { return nil }

// Channel opens a new channel with the given ID. messageType should be an empty
// Protobuf message of the type that will be passed through the channel. The
// message can implement Wrapper for automatic message (un)wrapping.
func (r *Router) Channel(id ChannelID, messageType proto.Message) (*Channel, error) { return nil, nil }

// PeerUpdates returns a channel with peer updates. The caller must cancel the
// context to end the subscription, and keep consuming messages in a timely
// fashion until the channel is closed to avoid blocking updates.
func (r *Router) PeerUpdates(ctx context.Context) PeerUpdates { return nil }

// PeerUpdates is a channel for receiving peer updates.
type PeerUpdates <-chan PeerUpdate

// PeerUpdate is a peer status update for reactors.
type PeerUpdate struct {
	PeerID PeerID
	Status PeerStatus
}
```
### Reactor Example

While reactors are a first-class concept in the current P2P stack (i.e. there is an explicit `p2p.Reactor` interface), they will simply be a design pattern in the new stack, loosely defined as "something which listens on a channel and reacts to messages".

Since reactors have very few formal constraints, they can be implemented in a variety of ways. There is currently no recommended pattern for implementing reactors, to avoid overspecification and scope creep in this ADR. However, prototyping and developing a reactor pattern should be done early during implementation, to make sure reactors built using the `Channel` interface can satisfy the needs for convenience, deterministic tests, and reliability.

Below is a trivial example of a simple echo reactor implemented as a function. The reactor will exchange the following Protobuf messages:
```protobuf
message EchoMessage {
  oneof inner {
    PingMessage ping = 1;
    PongMessage pong = 2;
  }
}

message PingMessage {
  string content = 1;
}

message PongMessage {
  string content = 1;
}
```
Implementing the `Wrapper` interface for `EchoMessage` allows transparently passing `PingMessage` and `PongMessage` through the channel, where it will automatically be (un)wrapped in an `EchoMessage`:
```go
func (m *EchoMessage) Wrap(inner proto.Message) error {
	switch inner := inner.(type) {
	case *PingMessage:
		m.Inner = &EchoMessage_PingMessage{Ping: inner}
	case *PongMessage:
		m.Inner = &EchoMessage_PongMessage{Pong: inner}
	default:
		return fmt.Errorf("unknown message %T", inner)
	}
	return nil
}

func (m *EchoMessage) Unwrap() (proto.Message, error) {
	switch inner := m.Inner.(type) {
	case *EchoMessage_PingMessage:
		return inner.Ping, nil
	case *EchoMessage_PongMessage:
		return inner.Pong, nil
	default:
		return nil, fmt.Errorf("unknown message %T", inner)
	}
}
```
The reactor itself could be implemented like this:

```go
// RunEchoReactor wires up an echo reactor to a router and runs it.
func RunEchoReactor(router *p2p.Router) error {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	channel, err := router.Channel(1, &EchoMessage{})
	if err != nil {
		return err
	}
	defer channel.Close()

	return EchoReactor(ctx, channel, router.PeerUpdates(ctx))
}

// EchoReactor provides an echo service, pinging all known peers until cancelled.
func EchoReactor(ctx context.Context, channel *p2p.Channel, peerUpdates p2p.PeerUpdates) error {
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()

	for {
		select {
		// Send ping message to all known peers every 5 seconds.
		case <-ticker.C:
			channel.Out <- Envelope{
				Broadcast: true,
				Message:   &PingMessage{Content: "👋"},
			}

		// When we receive a message from a peer, either respond to ping, output
		// pong, or report peer error on unknown message type.
		case envelope := <-channel.In:
			switch msg := envelope.Message.(type) {
			case *PingMessage:
				channel.Out <- Envelope{
					To:      envelope.From,
					Message: &PongMessage{Content: msg.Content},
				}
			case *PongMessage:
				fmt.Printf("%q replied with %q\n", envelope.From, msg.Content)
			default:
				channel.Error <- PeerError{
					PeerID:   envelope.From,
					Err:      fmt.Errorf("unexpected message %T", msg),
					Severity: PeerErrorSeverityLow,
				}
			}

		// Output info about any peer status changes.
		case peerUpdate := <-peerUpdates:
			fmt.Printf("Peer %q changed status to %q\n", peerUpdate.PeerID, peerUpdate.Status)

		// Exit when context is cancelled.
		case <-ctx.Done():
			return nil
		}
	}
}
```
### Implementation Plan

The existing P2P stack should be gradually migrated towards this design. The easiest path would likely be:

1. Implement the `Channel` and `PeerUpdates` APIs as shims on top of the current `Switch` and `Peer` APIs, and rewrite all reactors to use them instead.
2. Port the `privval` package to no longer use `SecretConnection` (e.g. by using gRPC instead), or temporarily duplicate its functionality.
3. Rewrite the current MConn connection and transport code to use the new `Transport` API, and migrate existing code to use it instead.
4. Implement the new `peer` and `peerStore` APIs, and either make the current address book a shim on top of these or replace it.
5. Replace the existing `Switch` abstraction with the new `Router`.
6. Move the PEX reactor and other address advertisement/exchange into the P2P core, possibly the `Router`.
7. Consider rewriting and/or cleaning up reactors and other P2P-related code to make better use of the new abstractions.

A note on backwards-compatibility: the current MConn protocol takes whole messages expressed as byte slices and splits them up into `PacketMsg` messages, where the final packet of a message has `PacketMsg.EOF` set. In order to maintain wire-compatibility with this protocol, the MConn transport needs to be aware of message boundaries, even though it does not care what the messages actually are. One way to handle this is to break abstraction boundaries and have the transport decode the input's length-prefixed message framing and use this to determine message boundaries, unless we accept breaking the protocol here.

Similarly, implementing channel handshakes with the current MConn protocol would require doing an initial connection handshake as today and using that information to "fake" the local channel handshake without it hitting the wire.
## Status

Accepted

## Consequences

### Positive

* Reduced coupling and simplified interfaces should lead to better understandability, increased reliability, and more testing.
* Using message passing via Go channels gives better control of backpressure and quality-of-service scheduling.
* Peer lifecycle and connection management is centralized in a single entity, making it easier to reason about.
* Detection, advertisement, and exchange of node addresses will be improved.
* Additional transports (e.g. QUIC) can be implemented and used in parallel with the existing MConn protocol.
* The P2P protocol will not be broken in the initial version, if possible.

### Negative
* Fully implementing the new design as intended is likely to require breaking changes to the P2P protocol at some point, although the initial implementation shouldn't.
* Gradually migrating the existing stack and maintaining backwards-compatibility will be more labor-intensive than simply replacing the entire stack.
* A complete overhaul of P2P internals is likely to cause temporary performance regressions and bugs as the implementation matures.
* Hiding peer management information inside the `p2p` package may prevent certain functionality or require additional deliberate interfaces for information exchange, as a tradeoff to simplify the design, reduce coupling, and avoid race conditions and lock contention.

### Neutral

* Implementation details around e.g. peer management, message scheduling, and peer and endpoint advertisement are not yet determined.

## References

* [ADR 061: P2P Refactor Scope](adr-061-p2p-refactor-scope.md)
* [#5670 p2p: internal refactor and architecture redesign](https://github.com/tendermint/tendermint/issues/5670)