p2p: implement new Transport interface (#5791) This implements a new `Transport` interface and related types for the P2P refactor in #5670. Previously, `conn.MConnection` was very tightly coupled to the `Peer` implementation -- in order to allow alternative non-multiplexed transports (e.g. QUIC), MConnection has now been moved below the `Transport` interface, as `MConnTransport`, and decoupled from the peer. Since the `p2p` package is not covered by our Go API stability, this is not considered a breaking change, and not listed in the changelog. The initial approach was to implement the new interface in its final form (which also involved possible protocol changes, see https://github.com/tendermint/spec/pull/227). However, it turned out that this would require a large amount of changes to existing P2P code because of the previous tight coupling between `Peer` and `MConnection` and the reliance on subtleties in the MConnection behavior. Instead, I have broadened the `Transport` interface to expose much of the existing MConnection interface, preserved much of the existing MConnection logic and behavior in the transport implementation, and tried to make as few changes to the rest of the P2P stack as possible. We will instead reduce this interface gradually as we refactor other parts of the P2P stack. The low-level transport code and protocol (e.g. MConnection, SecretConnection and so on) has not been significantly changed, and refactoring this is not a priority until we come up with a plan for QUIC adoption, as we may end up discarding the MConnection code entirely. There are no tests of the new `MConnTransport`, as this code is likely to evolve as we proceed with the P2P refactor, but tests should be added before a final release. The E2E tests are sufficient for basic validation in the meanwhile.
package conn

import (
	"bufio"
	"context"
	"errors"
	"fmt"
	"io"
	"math"
	"net"
	"reflect"
	"runtime/debug"
	"sync/atomic"
	"time"

	"github.com/gogo/protobuf/proto"

	"github.com/tendermint/tendermint/internal/libs/flowrate"
	"github.com/tendermint/tendermint/internal/libs/protoio"
	tmsync "github.com/tendermint/tendermint/internal/libs/sync"
	"github.com/tendermint/tendermint/internal/libs/timer"
	"github.com/tendermint/tendermint/libs/log"
	tmmath "github.com/tendermint/tendermint/libs/math"
	"github.com/tendermint/tendermint/libs/service"
	tmp2p "github.com/tendermint/tendermint/proto/tendermint/p2p"
)

const (
	// mirrors MaxPacketMsgPayloadSize from config/config.go
	defaultMaxPacketMsgPayloadSize = 1400

	numBatchPacketMsgs = 10
	minReadBufferSize  = 1024
	minWriteBufferSize = 65536
	updateStats        = 2 * time.Second

	// some of these defaults are written in the user config
	// flushThrottle, sendRate, recvRate
	// TODO: remove values present in config
	defaultFlushThrottle = 100 * time.Millisecond

	defaultSendQueueCapacity   = 1
	defaultRecvBufferCapacity  = 4096
	defaultRecvMessageCapacity = 22020096      // 21MB
	defaultSendRate            = int64(512000) // 500KB/s
	defaultRecvRate            = int64(512000) // 500KB/s
	defaultSendTimeout         = 10 * time.Second
	defaultPingInterval        = 60 * time.Second
	defaultPongTimeout         = 45 * time.Second
)
type receiveCbFunc func(chID ChannelID, msgBytes []byte)
type errorCbFunc func(interface{})

/*
Each peer has one `MConnection` (multiplex connection) instance.

__multiplex__ *noun* a system or signal involving simultaneous transmission of
several messages along a single channel of communication.

Each `MConnection` handles message transmission on multiple abstract communication
`Channel`s. Each channel has a globally unique byte id.
The byte id and the relative priorities of each `Channel` are configured upon
initialization of the connection.

Messages are sent with the Send method:

	func (m MConnection) Send(chID ChannelID, msgBytes []byte) bool {}

`Send(chID, msgBytes)` is a blocking call that waits until `msgBytes` is
successfully queued for the channel with the given id `chID`, or until the
request times out. The message bytes are expected to be serialized using Protobuf.

Inbound message bytes are handled with an onReceive callback function.
*/
type MConnection struct {
	service.BaseService

	conn          net.Conn
	bufConnReader *bufio.Reader
	bufConnWriter *bufio.Writer
	sendMonitor   *flowrate.Monitor
	recvMonitor   *flowrate.Monitor
	send          chan struct{}
	pong          chan struct{}
	channels      []*channel
	channelsIdx   map[ChannelID]*channel
	onReceive     receiveCbFunc
	onError       errorCbFunc
	errored       uint32
	config        MConnConfig

	// Closing quitSendRoutine will cause the sendRoutine to eventually quit.
	// doneSendRoutine is closed when the sendRoutine actually quits.
	quitSendRoutine chan struct{}
	doneSendRoutine chan struct{}

	// Closing quitRecvRoutine will cause the recvRoutine to eventually quit.
	quitRecvRoutine chan struct{}

	// used to ensure FlushStop and OnStop
	// are safe to call concurrently.
	stopMtx tmsync.Mutex

	flushTimer *timer.ThrottleTimer // flush writes as necessary but throttled.
	pingTimer  *time.Ticker         // send pings periodically

	// close conn if pong is not received in pongTimeout
	pongTimer     *time.Timer
	pongTimeoutCh chan bool // true - timeout, false - peer sent pong

	chStatsTimer *time.Ticker // update channel stats periodically

	created time.Time // time of creation

	_maxPacketMsgSize int
}
// MConnConfig is an MConnection configuration.
type MConnConfig struct {
	SendRate int64 `mapstructure:"send_rate"`
	RecvRate int64 `mapstructure:"recv_rate"`

	// Maximum payload size
	MaxPacketMsgPayloadSize int `mapstructure:"max_packet_msg_payload_size"`

	// Interval to flush writes (throttled)
	FlushThrottle time.Duration `mapstructure:"flush_throttle"`

	// Interval to send pings
	PingInterval time.Duration `mapstructure:"ping_interval"`

	// Maximum wait time for pongs
	PongTimeout time.Duration `mapstructure:"pong_timeout"`
}

// DefaultMConnConfig returns the default config.
func DefaultMConnConfig() MConnConfig {
	return MConnConfig{
		SendRate:                defaultSendRate,
		RecvRate:                defaultRecvRate,
		MaxPacketMsgPayloadSize: defaultMaxPacketMsgPayloadSize,
		FlushThrottle:           defaultFlushThrottle,
		PingInterval:            defaultPingInterval,
		PongTimeout:             defaultPongTimeout,
	}
}
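// Illustrative only: a caller that wants different limits can start from the
// defaults and override individual fields before constructing the connection
// (the values below are arbitrary example numbers, not recommended settings):
//
//	cfg := DefaultMConnConfig()
//	cfg.SendRate = 5120000 // ~5000KB/s instead of the 500KB/s default
//	cfg.RecvRate = 5120000
//	cfg.MaxPacketMsgPayloadSize = 1024
//
// and then pass cfg to NewMConnectionWithConfig below.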
// NewMConnection wraps net.Conn and creates a multiplex connection.
func NewMConnection(
	logger log.Logger,
	conn net.Conn,
	chDescs []*ChannelDescriptor,
	onReceive receiveCbFunc,
	onError errorCbFunc,
) *MConnection {
	return NewMConnectionWithConfig(
		logger,
		conn,
		chDescs,
		onReceive,
		onError,
		DefaultMConnConfig())
}

// NewMConnectionWithConfig wraps net.Conn and creates a multiplex connection with the given config.
func NewMConnectionWithConfig(
	logger log.Logger,
	conn net.Conn,
	chDescs []*ChannelDescriptor,
	onReceive receiveCbFunc,
	onError errorCbFunc,
	config MConnConfig,
) *MConnection {
	if config.PongTimeout >= config.PingInterval {
		panic("pongTimeout must be less than pingInterval (otherwise, next ping will reset pong timer)")
	}

	mconn := &MConnection{
		conn:          conn,
		bufConnReader: bufio.NewReaderSize(conn, minReadBufferSize),
		bufConnWriter: bufio.NewWriterSize(conn, minWriteBufferSize),
		sendMonitor:   flowrate.New(0, 0),
		recvMonitor:   flowrate.New(0, 0),
		send:          make(chan struct{}, 1),
		pong:          make(chan struct{}, 1),
		onReceive:     onReceive,
		onError:       onError,
		config:        config,
		created:       time.Now(),
	}

	mconn.BaseService = *service.NewBaseService(logger, "MConnection", mconn)

	// Create channels
	var channelsIdx = map[ChannelID]*channel{}
	var channels = []*channel{}

	for _, desc := range chDescs {
		channel := newChannel(mconn, *desc)
		channelsIdx[channel.desc.ID] = channel
		channels = append(channels, channel)
	}
	mconn.channels = channels
	mconn.channelsIdx = channelsIdx

	// maxPacketMsgSize() is a bit heavy, so call just once
	mconn._maxPacketMsgSize = mconn.maxPacketMsgSize()

	return mconn
}
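// The sketch below is illustrative only: it shows one plausible way to wire up
// an MConnection from a live net.Conn, assuming the caller already has a logger
// and an established connection. The function name and the callback stubs are
// hypothetical; real reactors install their own handlers.
func exampleNewMConnection(logger log.Logger, netConn net.Conn) *MConnection {
	// Describe the logical channels multiplexed over the single connection.
	// Unset capacities are filled in by FillDefaults via newChannel.
	chDescs := []*ChannelDescriptor{
		{ID: ChannelID(0x01), Priority: 5},
	}

	// onReceive is invoked with every fully reassembled message; onError is
	// invoked at most once when the connection fails and is being stopped.
	onReceive := func(chID ChannelID, msgBytes []byte) {
		logger.Debug("received message", "chID", chID, "len", len(msgBytes))
	}
	onError := func(err interface{}) {
		logger.Error("connection error", "err", err)
	}

	// Uses DefaultMConnConfig; NewMConnectionWithConfig accepts a custom MConnConfig instead.
	return NewMConnection(logger, netConn, chDescs, onReceive, onError)
}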
// OnStart implements BaseService
func (c *MConnection) OnStart(ctx context.Context) error {
	if err := c.BaseService.OnStart(ctx); err != nil {
		return err
	}
	c.flushTimer = timer.NewThrottleTimer("flush", c.config.FlushThrottle)
	c.pingTimer = time.NewTicker(c.config.PingInterval)
	c.pongTimeoutCh = make(chan bool, 1)
	c.chStatsTimer = time.NewTicker(updateStats)
	c.quitSendRoutine = make(chan struct{})
	c.doneSendRoutine = make(chan struct{})
	c.quitRecvRoutine = make(chan struct{})
	go c.sendRoutine()
	go c.recvRoutine()
	return nil
}

// stopServices stops the BaseService and timers and closes the quitSendRoutine.
// If the quitSendRoutine was already closed, it returns true, otherwise it returns false.
// It uses the stopMtx to ensure only one of FlushStop and OnStop can do this at a time.
func (c *MConnection) stopServices() (alreadyStopped bool) {
	c.stopMtx.Lock()
	defer c.stopMtx.Unlock()

	select {
	case <-c.quitSendRoutine:
		// already quit
		return true
	default:
	}

	select {
	case <-c.quitRecvRoutine:
		// already quit
		return true
	default:
	}

	c.BaseService.OnStop()
	c.flushTimer.Stop()
	c.pingTimer.Stop()
	c.chStatsTimer.Stop()

	// inform the recvRoutine that we are shutting down
	close(c.quitRecvRoutine)
	close(c.quitSendRoutine)
	return false
}

// OnStop implements BaseService
func (c *MConnection) OnStop() {
	if c.stopServices() {
		return
	}

	c.conn.Close()

	// We can't close pong safely here because
	// recvRoutine may write to it after we've stopped.
	// Though it doesn't need to get closed at all,
	// we close it @ recvRoutine.
}

func (c *MConnection) String() string {
	return fmt.Sprintf("MConn{%v}", c.conn.RemoteAddr())
}

func (c *MConnection) flush() {
	c.Logger.Debug("Flush", "conn", c)
	err := c.bufConnWriter.Flush()
	if err != nil {
		c.Logger.Debug("MConnection flush failed", "err", err)
	}
}

// Catch panics, usually caused by remote disconnects.
func (c *MConnection) _recover() {
	if r := recover(); r != nil {
		c.Logger.Error("MConnection panicked", "err", r, "stack", string(debug.Stack()))
		c.stopForError(fmt.Errorf("recovered from panic: %v", r))
	}
}

func (c *MConnection) stopForError(r interface{}) {
	if err := c.Stop(); err != nil {
		c.Logger.Error("Error stopping connection", "err", err)
	}

	if atomic.CompareAndSwapUint32(&c.errored, 0, 1) {
		if c.onError != nil {
			c.onError(r)
		}
	}
}

// Send queues a message to be sent to the given channel.
func (c *MConnection) Send(chID ChannelID, msgBytes []byte) bool {
	if !c.IsRunning() {
		return false
	}

	c.Logger.Debug("Send", "channel", chID, "conn", c, "msgBytes", msgBytes)

	// Send message to channel.
	channel, ok := c.channelsIdx[chID]
	if !ok {
		c.Logger.Error(fmt.Sprintf("Cannot send bytes, unknown channel %X", chID))
		return false
	}

	success := channel.sendBytes(msgBytes)
	if success {
		// Wake up sendRoutine if necessary
		select {
		case c.send <- struct{}{}:
		default:
		}
	} else {
		c.Logger.Debug("Send failed", "channel", chID, "conn", c, "msgBytes", msgBytes)
	}
	return success
}
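// A typical caller treats a false return from Send as a dropped message, for
// example (the channel ID 0x01 here is only a placeholder value):
//
//	if !mconn.Send(ChannelID(0x01), msgBytes) {
//		// The channel ID was unknown, the connection is no longer running,
//		// or the channel's send queue stayed full for defaultSendTimeout (10s).
//	}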
// sendRoutine polls for packets to send from channels.
func (c *MConnection) sendRoutine() {
	defer c._recover()
	protoWriter := protoio.NewDelimitedWriter(c.bufConnWriter)

FOR_LOOP:
	for {
		var _n int
		var err error
	SELECTION:
		select {
		case <-c.flushTimer.Ch:
			// NOTE: flushTimer.Set() must be called every time
			// something is written to .bufConnWriter.
			c.flush()
		case <-c.chStatsTimer.C:
			for _, channel := range c.channels {
				channel.updateStats()
			}
		case <-c.pingTimer.C:
			c.Logger.Debug("Send Ping")
			_n, err = protoWriter.WriteMsg(mustWrapPacket(&tmp2p.PacketPing{}))
			if err != nil {
				c.Logger.Error("Failed to send PacketPing", "err", err)
				break SELECTION
			}
			c.sendMonitor.Update(_n)
			c.Logger.Debug("Starting pong timer", "dur", c.config.PongTimeout)
			c.pongTimer = time.AfterFunc(c.config.PongTimeout, func() {
				select {
				case c.pongTimeoutCh <- true:
				default:
				}
			})
			c.flush()
		case timeout := <-c.pongTimeoutCh:
			if timeout {
				c.Logger.Debug("Pong timeout")
				err = errors.New("pong timeout")
			} else {
				c.stopPongTimer()
			}
		case <-c.pong:
			c.Logger.Debug("Send Pong")
			_n, err = protoWriter.WriteMsg(mustWrapPacket(&tmp2p.PacketPong{}))
			if err != nil {
				c.Logger.Error("Failed to send PacketPong", "err", err)
				break SELECTION
			}
			c.sendMonitor.Update(_n)
			c.flush()
		case <-c.quitSendRoutine:
			break FOR_LOOP
		case <-c.send:
			// Send some PacketMsgs
			eof := c.sendSomePacketMsgs()
			if !eof {
				// Keep sendRoutine awake.
				select {
				case c.send <- struct{}{}:
				default:
				}
			}
		}

		if !c.IsRunning() {
			break FOR_LOOP
		}
		if err != nil {
			c.Logger.Error("Connection failed @ sendRoutine", "conn", c, "err", err)
			c.stopForError(err)
			break FOR_LOOP
		}
	}

	// Cleanup
	c.stopPongTimer()
	close(c.doneSendRoutine)
}

// Returns true if messages from channels were exhausted.
// Blocks in accordance with .sendMonitor throttling.
func (c *MConnection) sendSomePacketMsgs() bool {
	// Block until .sendMonitor says we can write.
	// Once we're ready we send more than we asked for,
	// but amortized it should even out.
	c.sendMonitor.Limit(c._maxPacketMsgSize, atomic.LoadInt64(&c.config.SendRate), true)

	// Now send some PacketMsgs.
	for i := 0; i < numBatchPacketMsgs; i++ {
		if c.sendPacketMsg() {
			return true
		}
	}
	return false
}

// Returns true if messages from channels were exhausted.
func (c *MConnection) sendPacketMsg() bool {
	// Choose a channel to create a PacketMsg from.
	// The chosen channel will be the one whose recentlySent/priority is the least.
	var leastRatio float32 = math.MaxFloat32
	var leastChannel *channel
	for _, channel := range c.channels {
		// If nothing to send, skip this channel
		if !channel.isSendPending() {
			continue
		}
		// Get ratio, and keep track of lowest ratio.
		ratio := float32(channel.recentlySent) / float32(channel.desc.Priority)
		if ratio < leastRatio {
			leastRatio = ratio
			leastChannel = channel
		}
	}

	// Nothing to send?
	if leastChannel == nil {
		return true
	}
	// c.Logger.Info("Found a msgPacket to send")

	// Make & send a PacketMsg from this channel
	_n, err := leastChannel.writePacketMsgTo(c.bufConnWriter)
	if err != nil {
		c.Logger.Error("Failed to write PacketMsg", "err", err)
		c.stopForError(err)
		return true
	}
	c.sendMonitor.Update(_n)
	c.flushTimer.Set()
	return false
}
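// To make the least-ratio rule above concrete (with hypothetical numbers): a
// channel with Priority 10 that has recentlySent 2000 has ratio 2000/10 = 200,
// while a channel with Priority 1 that has recentlySent 150 has ratio 150/1 = 150,
// so the second channel is picked for the next packet even though its priority is
// lower. Because updateStats decays recentlySent exponentially, channels that are
// both saturated end up sharing bandwidth roughly in proportion to their Priority.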
// recvRoutine reads PacketMsgs and reconstructs the message using the channels' "recving" buffer.
// After a whole message has been assembled, it's pushed to onReceive().
// Blocks depending on how the connection is throttled.
// Otherwise, it never blocks.
func (c *MConnection) recvRoutine() {
	defer c._recover()

	protoReader := protoio.NewDelimitedReader(c.bufConnReader, c._maxPacketMsgSize)

FOR_LOOP:
	for {
		// Block until .recvMonitor says we can read.
		c.recvMonitor.Limit(c._maxPacketMsgSize, atomic.LoadInt64(&c.config.RecvRate), true)

		// Peek into bufConnReader for debugging
		/*
			if numBytes := c.bufConnReader.Buffered(); numBytes > 0 {
				bz, err := c.bufConnReader.Peek(tmmath.MinInt(numBytes, 100))
				if err == nil {
					// return
				} else {
					c.Logger.Debug("Error peeking connection buffer", "err", err)
					// return nil
				}
				c.Logger.Info("Peek connection buffer", "numBytes", numBytes, "bz", bz)
			}
		*/

		// Read packet type
		var packet tmp2p.Packet
		_n, err := protoReader.ReadMsg(&packet)
		c.recvMonitor.Update(_n)
		if err != nil {
			// stopServices was invoked and we are shutting down;
			// receiving is expected to fail since we will close the connection
			select {
			case <-c.quitRecvRoutine:
				break FOR_LOOP
			default:
			}

			if c.IsRunning() {
				if err == io.EOF {
					c.Logger.Info("Connection is closed @ recvRoutine (likely by the other side)", "conn", c)
				} else {
					c.Logger.Debug("Connection failed @ recvRoutine (reading byte)", "conn", c, "err", err)
				}
				c.stopForError(err)
			}
			break FOR_LOOP
		}

		// Read more depending on packet type.
		switch pkt := packet.Sum.(type) {
		case *tmp2p.Packet_PacketPing:
			// TODO: prevent abuse, as they cause flush()'s.
			// https://github.com/tendermint/tendermint/issues/1190
			c.Logger.Debug("Receive Ping")
			select {
			case c.pong <- struct{}{}:
			default:
				// never block
			}
		case *tmp2p.Packet_PacketPong:
			c.Logger.Debug("Receive Pong")
			select {
			case c.pongTimeoutCh <- false:
			default:
				// never block
			}
		case *tmp2p.Packet_PacketMsg:
			channelID := ChannelID(pkt.PacketMsg.ChannelID)
			channel, ok := c.channelsIdx[channelID]
			if pkt.PacketMsg.ChannelID < 0 || pkt.PacketMsg.ChannelID > math.MaxUint8 || !ok || channel == nil {
				err := fmt.Errorf("unknown channel %X", pkt.PacketMsg.ChannelID)
				c.Logger.Debug("Connection failed @ recvRoutine", "conn", c, "err", err)
				c.stopForError(err)
				break FOR_LOOP
			}

			msgBytes, err := channel.recvPacketMsg(*pkt.PacketMsg)
			if err != nil {
				if c.IsRunning() {
					c.Logger.Debug("Connection failed @ recvRoutine", "conn", c, "err", err)
					c.stopForError(err)
				}
				break FOR_LOOP
			}
			if msgBytes != nil {
				c.Logger.Debug("Received bytes", "chID", channelID, "msgBytes", msgBytes)
				// NOTE: This means the reactor.Receive runs in the same thread as the p2p recv routine
				c.onReceive(channelID, msgBytes)
			}
		default:
			err := fmt.Errorf("unknown message type %v", reflect.TypeOf(packet))
			c.Logger.Error("Connection failed @ recvRoutine", "conn", c, "err", err)
			c.stopForError(err)
			break FOR_LOOP
		}
	}

	// Cleanup
	close(c.pong)
	for range c.pong {
		// Drain
	}
}

// not goroutine-safe
func (c *MConnection) stopPongTimer() {
	if c.pongTimer != nil {
		_ = c.pongTimer.Stop()
		c.pongTimer = nil
	}
}

// maxPacketMsgSize returns the maximum size of a PacketMsg
func (c *MConnection) maxPacketMsgSize() int {
	bz, err := proto.Marshal(mustWrapPacket(&tmp2p.PacketMsg{
		ChannelID: 0x01,
		EOF:       true,
		Data:      make([]byte, c.config.MaxPacketMsgPayloadSize),
	}))
	if err != nil {
		panic(err)
	}
	return len(bz)
}
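// For the defaults above this works out to the 1400-byte payload plus a small
// protobuf overhead (field tags, the Data length prefix, and the Packet oneof
// wrapper), i.e. a handful of extra bytes. The exact figure depends on the
// varint encoding, which is why it is measured here rather than hard-coded.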
type ChannelStatus struct {
	ID                byte
	SendQueueCapacity int
	SendQueueSize     int
	Priority          int
	RecentlySent      int64
}

//-----------------------------------------------------------------------------

// ChannelID is an arbitrary channel ID.
type ChannelID uint16

type ChannelDescriptor struct {
	ID          ChannelID
	Priority    int
	MessageType proto.Message

	// TODO: Remove once p2p refactor is complete.
	SendQueueCapacity   int
	RecvMessageCapacity int

	// RecvBufferCapacity defines the max buffer size of inbound messages for a
	// given p2p Channel queue.
	RecvBufferCapacity int
}

func (chDesc ChannelDescriptor) FillDefaults() (filled ChannelDescriptor) {
	if chDesc.SendQueueCapacity == 0 {
		chDesc.SendQueueCapacity = defaultSendQueueCapacity
	}
	if chDesc.RecvBufferCapacity == 0 {
		chDesc.RecvBufferCapacity = defaultRecvBufferCapacity
	}
	if chDesc.RecvMessageCapacity == 0 {
		chDesc.RecvMessageCapacity = defaultRecvMessageCapacity
	}
	filled = chDesc
	return
}
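// For example, a minimal descriptor such as
//
//	ChannelDescriptor{ID: ChannelID(0x01), Priority: 5}
//
// comes back from FillDefaults with SendQueueCapacity = 1,
// RecvBufferCapacity = 4096, and RecvMessageCapacity = 22020096 (~21MB),
// matching the package defaults above; explicitly set fields are left untouched.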
// NOTE: not goroutine-safe.
type channel struct {
	// Exponential moving average.
	// This field must be accessed atomically.
	// It is first in the struct to ensure correct alignment.
	// See https://github.com/tendermint/tendermint/issues/7000.
	recentlySent int64

	conn          *MConnection
	desc          ChannelDescriptor
	sendQueue     chan []byte
	sendQueueSize int32 // atomic.
	recving       []byte
	sending       []byte

	maxPacketMsgPayloadSize int

	Logger log.Logger
}

func newChannel(conn *MConnection, desc ChannelDescriptor) *channel {
	desc = desc.FillDefaults()
	if desc.Priority <= 0 {
		panic("Channel default priority must be a positive integer")
	}
	return &channel{
		conn:                    conn,
		desc:                    desc,
		sendQueue:               make(chan []byte, desc.SendQueueCapacity),
		recving:                 make([]byte, 0, desc.RecvBufferCapacity),
		maxPacketMsgPayloadSize: conn.config.MaxPacketMsgPayloadSize,
		Logger:                  conn.Logger,
	}
}

// Queues message to send to this channel.
// Goroutine-safe
// Times out (and returns false) after defaultSendTimeout
func (ch *channel) sendBytes(bytes []byte) bool {
	select {
	case ch.sendQueue <- bytes:
		atomic.AddInt32(&ch.sendQueueSize, 1)
		return true
	case <-time.After(defaultSendTimeout):
		return false
	}
}

// Returns true if any PacketMsgs are pending to be sent.
// Call before calling nextPacketMsg()
// Goroutine-safe
func (ch *channel) isSendPending() bool {
	if len(ch.sending) == 0 {
		if len(ch.sendQueue) == 0 {
			return false
		}
		ch.sending = <-ch.sendQueue
	}
	return true
}

// Creates a new PacketMsg to send.
// Not goroutine-safe
func (ch *channel) nextPacketMsg() tmp2p.PacketMsg {
	packet := tmp2p.PacketMsg{ChannelID: int32(ch.desc.ID)}
	maxSize := ch.maxPacketMsgPayloadSize
	packet.Data = ch.sending[:tmmath.MinInt(maxSize, len(ch.sending))]
	if len(ch.sending) <= maxSize {
		packet.EOF = true
		ch.sending = nil
		atomic.AddInt32(&ch.sendQueueSize, -1) // decrement sendQueueSize
	} else {
		packet.EOF = false
		ch.sending = ch.sending[tmmath.MinInt(maxSize, len(ch.sending)):]
	}
	return packet
}
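// As an illustration of the fragmentation above: with the default
// maxPacketMsgPayloadSize of 1400 bytes, a queued 3000-byte message is emitted
// as three PacketMsgs carrying 1400, 1400, and 200 bytes of Data; only the last
// packet has EOF set, which is how the receiving side knows the message is complete.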
// Writes next PacketMsg to w and updates ch.recentlySent.
// Not goroutine-safe
func (ch *channel) writePacketMsgTo(w io.Writer) (n int, err error) {
	packet := ch.nextPacketMsg()
	n, err = protoio.NewDelimitedWriter(w).WriteMsg(mustWrapPacket(&packet))
	atomic.AddInt64(&ch.recentlySent, int64(n))
	return
}

// Handles incoming PacketMsgs. It returns the message bytes if the message is
// complete, which are owned by the caller and will not be modified.
// Not goroutine-safe
func (ch *channel) recvPacketMsg(packet tmp2p.PacketMsg) ([]byte, error) {
	ch.Logger.Debug("Read PacketMsg", "conn", ch.conn, "packet", packet)
	var recvCap, recvReceived = ch.desc.RecvMessageCapacity, len(ch.recving)+len(packet.Data)
	if recvCap < recvReceived {
		return nil, fmt.Errorf("received message exceeds available capacity: %v < %v", recvCap, recvReceived)
	}
	ch.recving = append(ch.recving, packet.Data...)
	if packet.EOF {
		msgBytes := ch.recving
		ch.recving = make([]byte, 0, ch.desc.RecvBufferCapacity)
		return msgBytes, nil
	}
	return nil, nil
}
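// With the default RecvMessageCapacity of 22020096 bytes (~21MB), a peer that
// keeps streaming non-EOF packets for a single message trips the capacity check
// above once the reassembled size would exceed that limit; the returned error is
// then handled in recvRoutine, which tears the connection down via stopForError.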
// Call this periodically to update stats for throttling purposes.
// Not goroutine-safe
func (ch *channel) updateStats() {
	// Exponential decay of stats.
	// TODO: optimize.
	atomic.StoreInt64(&ch.recentlySent, int64(float64(atomic.LoadInt64(&ch.recentlySent))*0.8))
}

//----------------------------------------
// Packet

// mustWrapPacket takes a packet kind (oneof) and wraps it in a tmp2p.Packet message.
func mustWrapPacket(pb proto.Message) *tmp2p.Packet {
	var msg tmp2p.Packet

	switch pb := pb.(type) {
	case *tmp2p.Packet: // already a packet
		msg = *pb
	case *tmp2p.PacketPing:
		msg = tmp2p.Packet{
			Sum: &tmp2p.Packet_PacketPing{
				PacketPing: pb,
			},
		}
	case *tmp2p.PacketPong:
		msg = tmp2p.Packet{
			Sum: &tmp2p.Packet_PacketPong{
				PacketPong: pb,
			},
		}
	case *tmp2p.PacketMsg:
		msg = tmp2p.Packet{
			Sum: &tmp2p.Packet_PacketMsg{
				PacketMsg: pb,
			},
		}
	default:
		panic(fmt.Errorf("unknown packet type %T", pb))
	}

	return &msg
}