
p2p: implement new Transport interface (#5791)

This implements a new `Transport` interface and related types for the P2P refactor in #5670. Previously, `conn.MConnection` was very tightly coupled to the `Peer` implementation -- in order to allow alternative non-multiplexed transports (e.g. QUIC), MConnection has now been moved below the `Transport` interface, as `MConnTransport`, and decoupled from the peer. Since the `p2p` package is not covered by our Go API stability, this is not considered a breaking change and is not listed in the changelog.

The initial approach was to implement the new interface in its final form (which also involved possible protocol changes, see https://github.com/tendermint/spec/pull/227). However, it turned out that this would require a large number of changes to existing P2P code because of the previous tight coupling between `Peer` and `MConnection` and the reliance on subtleties in the MConnection behavior. Instead, I have broadened the `Transport` interface to expose much of the existing MConnection interface, preserved much of the existing MConnection logic and behavior in the transport implementation, and tried to make as few changes to the rest of the P2P stack as possible. We will instead reduce this interface gradually as we refactor other parts of the P2P stack.

The low-level transport code and protocol (e.g. MConnection, SecretConnection, and so on) have not been significantly changed, and refactoring them is not a priority until we come up with a plan for QUIC adoption, as we may end up discarding the MConnection code entirely. There are no tests of the new `MConnTransport`, as this code is likely to evolve as we proceed with the P2P refactor, but tests should be added before a final release. The E2E tests are sufficient for basic validation in the meantime.
4 years ago
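The commit message describes splitting connection handling (a `Connection`-like object owned by a `Transport`) away from the `Peer`. As a rough illustration of that shape only -- the names and method signatures below are hypothetical, not the actual tendermint API -- a minimal sketch might look like:

```go
package main

import "fmt"

// Connection and Transport are illustrative stand-ins for the kind of
// interfaces the commit describes; the real interfaces differ.
type Connection interface {
	SendMessage(chID byte, msg []byte) error
	ReceiveMessage() (chID byte, msg []byte, err error)
	Close() error
}

type Transport interface {
	Accept() (Connection, error)
	Dial(addr string) (Connection, error)
}

// memConn is a toy in-memory Connection, just to show the shape compiles:
// an MConnTransport would instead wrap an MConnection over a net.Conn.
type memConn struct{ inbox [][]byte }

func (c *memConn) SendMessage(chID byte, msg []byte) error {
	c.inbox = append(c.inbox, append([]byte{chID}, msg...))
	return nil
}

func (c *memConn) ReceiveMessage() (byte, []byte, error) {
	if len(c.inbox) == 0 {
		return 0, nil, fmt.Errorf("no message")
	}
	b := c.inbox[0]
	c.inbox = c.inbox[1:]
	return b[0], b[1:], nil
}

func (c *memConn) Close() error { return nil }

func main() {
	var conn Connection = &memConn{}
	_ = conn.SendMessage(0x40, []byte("hello"))
	ch, msg, _ := conn.ReceiveMessage()
	fmt.Printf("%X %s\n", ch, msg)
}
```

The point of the split is that the peer layer talks only to `Connection`, so a QUIC-backed transport could be dropped in without touching peer logic.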
package conn

import (
    "bufio"
    "context"
    "errors"
    "fmt"
    "io"
    "math"
    "net"
    "reflect"
    "runtime/debug"
    "sync"
    "sync/atomic"
    "time"

    "github.com/gogo/protobuf/proto"

    "github.com/tendermint/tendermint/internal/libs/flowrate"
    "github.com/tendermint/tendermint/internal/libs/protoio"
    "github.com/tendermint/tendermint/internal/libs/timer"
    "github.com/tendermint/tendermint/libs/log"
    tmmath "github.com/tendermint/tendermint/libs/math"
    "github.com/tendermint/tendermint/libs/service"
    tmp2p "github.com/tendermint/tendermint/proto/tendermint/p2p"
)

const (
    // mirrors MaxPacketMsgPayloadSize from config/config.go
    defaultMaxPacketMsgPayloadSize = 1400

    numBatchPacketMsgs = 10
    minReadBufferSize  = 1024
    minWriteBufferSize = 65536
    updateStats        = 2 * time.Second

    // some of these defaults are written in the user config
    // flushThrottle, sendRate, recvRate
    // TODO: remove values present in config
    defaultFlushThrottle = 100 * time.Millisecond

    defaultSendQueueCapacity   = 1
    defaultRecvBufferCapacity  = 4096
    defaultRecvMessageCapacity = 22020096      // 21MB
    defaultSendRate            = int64(512000) // 500KB/s
    defaultRecvRate            = int64(512000) // 500KB/s
    defaultSendTimeout         = 10 * time.Second
    defaultPingInterval        = 60 * time.Second
    defaultPongTimeout         = 45 * time.Second
)
type receiveCbFunc func(ctx context.Context, chID ChannelID, msgBytes []byte)
type errorCbFunc func(context.Context, interface{})

/*
Each peer has one `MConnection` (multiplex connection) instance.

__multiplex__ *noun* a system or signal involving simultaneous transmission of
several messages along a single channel of communication.

Each `MConnection` handles message transmission on multiple abstract communication
`Channel`s. Each channel has a globally unique byte id.
The byte id and the relative priorities of each `Channel` are configured upon
initialization of the connection.

Messages are queued for sending with:

    func (m MConnection) Send(chID byte, msgBytes []byte) bool {}

`Send(chID, msgBytes)` is a blocking call that waits until `msg` is
successfully queued for the channel with the given id byte `chID`, or until the
request times out. The message `msg` is serialized using Protobuf.

Inbound message bytes are handled with an onReceive callback function.
*/
type MConnection struct {
    service.BaseService
    logger log.Logger

    conn          net.Conn
    bufConnReader *bufio.Reader
    bufConnWriter *bufio.Writer
    sendMonitor   *flowrate.Monitor
    recvMonitor   *flowrate.Monitor
    send          chan struct{}
    pong          chan struct{}
    channels      []*channel
    channelsIdx   map[ChannelID]*channel
    onReceive     receiveCbFunc
    onError       errorCbFunc
    errored       uint32
    config        MConnConfig

    // Closing quitSendRoutine will cause the sendRoutine to eventually quit.
    // doneSendRoutine is closed when the sendRoutine actually quits.
    quitSendRoutine chan struct{}
    doneSendRoutine chan struct{}

    // Closing quitRecvRoutine will cause the recvRoutine to eventually quit.
    quitRecvRoutine chan struct{}

    // used to ensure FlushStop and OnStop
    // are safe to call concurrently.
    stopMtx sync.Mutex

    cancel context.CancelFunc

    flushTimer *timer.ThrottleTimer // flush writes as necessary but throttled.
    pingTimer  *time.Ticker         // send pings periodically

    // close conn if pong is not received in pongTimeout
    pongTimer     *time.Timer
    pongTimeoutCh chan bool // true - timeout, false - peer sent pong

    chStatsTimer *time.Ticker // update channel stats periodically

    created time.Time // time of creation

    _maxPacketMsgSize int
}
// MConnConfig is an MConnection configuration.
type MConnConfig struct {
    SendRate int64 `mapstructure:"send_rate"`
    RecvRate int64 `mapstructure:"recv_rate"`

    // Maximum payload size
    MaxPacketMsgPayloadSize int `mapstructure:"max_packet_msg_payload_size"`

    // Interval to flush writes (throttled)
    FlushThrottle time.Duration `mapstructure:"flush_throttle"`

    // Interval to send pings
    PingInterval time.Duration `mapstructure:"ping_interval"`

    // Maximum wait time for pongs
    PongTimeout time.Duration `mapstructure:"pong_timeout"`

    // Process/Transport Start time
    StartTime time.Time `mapstructure:",omitempty"`
}

// DefaultMConnConfig returns the default config.
func DefaultMConnConfig() MConnConfig {
    return MConnConfig{
        SendRate:                defaultSendRate,
        RecvRate:                defaultRecvRate,
        MaxPacketMsgPayloadSize: defaultMaxPacketMsgPayloadSize,
        FlushThrottle:           defaultFlushThrottle,
        PingInterval:            defaultPingInterval,
        PongTimeout:             defaultPongTimeout,
        StartTime:               time.Now(),
    }
}
// NewMConnection wraps a net.Conn and creates a multiplex connection with the
// given config.
func NewMConnection(
    logger log.Logger,
    conn net.Conn,
    chDescs []*ChannelDescriptor,
    onReceive receiveCbFunc,
    onError errorCbFunc,
    config MConnConfig,
) *MConnection {
    if config.PongTimeout >= config.PingInterval {
        panic("pongTimeout must be less than pingInterval (otherwise, next ping will reset pong timer)")
    }

    mconn := &MConnection{
        logger:        logger,
        conn:          conn,
        bufConnReader: bufio.NewReaderSize(conn, minReadBufferSize),
        bufConnWriter: bufio.NewWriterSize(conn, minWriteBufferSize),
        sendMonitor:   flowrate.New(config.StartTime, 0, 0),
        recvMonitor:   flowrate.New(config.StartTime, 0, 0),
        send:          make(chan struct{}, 1),
        pong:          make(chan struct{}, 1),
        onReceive:     onReceive,
        onError:       onError,
        config:        config,
        created:       time.Now(),
        cancel:        func() {},
    }

    mconn.BaseService = *service.NewBaseService(logger, "MConnection", mconn)

    // Create channels
    var channelsIdx = map[ChannelID]*channel{}
    var channels = []*channel{}

    for _, desc := range chDescs {
        channel := newChannel(mconn, *desc)
        channelsIdx[channel.desc.ID] = channel
        channels = append(channels, channel)
    }
    mconn.channels = channels
    mconn.channelsIdx = channelsIdx

    // maxPacketMsgSize() is a bit heavy, so call just once
    mconn._maxPacketMsgSize = mconn.maxPacketMsgSize()

    return mconn
}
// OnStart implements BaseService
func (c *MConnection) OnStart(ctx context.Context) error {
    c.flushTimer = timer.NewThrottleTimer("flush", c.config.FlushThrottle)
    c.pingTimer = time.NewTicker(c.config.PingInterval)
    c.pongTimeoutCh = make(chan bool, 1)
    c.chStatsTimer = time.NewTicker(updateStats)
    c.quitSendRoutine = make(chan struct{})
    c.doneSendRoutine = make(chan struct{})
    c.quitRecvRoutine = make(chan struct{})
    go c.sendRoutine(ctx)
    go c.recvRoutine(ctx)
    return nil
}
// stopServices stops the BaseService and timers and closes the quitSendRoutine.
// If the quitSendRoutine was already closed, it returns true, otherwise it returns false.
// It uses the stopMtx to ensure only one of FlushStop and OnStop can do this at a time.
func (c *MConnection) stopServices() (alreadyStopped bool) {
    c.stopMtx.Lock()
    defer c.stopMtx.Unlock()

    select {
    case <-c.quitSendRoutine:
        // already quit
        return true
    default:
    }

    select {
    case <-c.quitRecvRoutine:
        // already quit
        return true
    default:
    }

    c.flushTimer.Stop()
    c.pingTimer.Stop()
    c.chStatsTimer.Stop()

    // inform the recvRoutine that we are shutting down
    close(c.quitRecvRoutine)
    close(c.quitSendRoutine)
    return false
}

// OnStop implements BaseService
func (c *MConnection) OnStop() {
    if c.stopServices() {
        return
    }

    c.conn.Close()

    // We can't close pong safely here because
    // recvRoutine may write to it after we've stopped.
    // Though it doesn't need to get closed at all,
    // we close it @ recvRoutine.
}
func (c *MConnection) String() string {
    return fmt.Sprintf("MConn{%v}", c.conn.RemoteAddr())
}

func (c *MConnection) flush() {
    c.logger.Debug("Flush", "conn", c)
    err := c.bufConnWriter.Flush()
    if err != nil {
        c.logger.Debug("MConnection flush failed", "err", err)
    }
}

// Catch panics, usually caused by remote disconnects.
func (c *MConnection) _recover(ctx context.Context) {
    if r := recover(); r != nil {
        c.logger.Error("MConnection panicked", "err", r, "stack", string(debug.Stack()))
        c.stopForError(ctx, fmt.Errorf("recovered from panic: %v", r))
    }
}

func (c *MConnection) stopForError(ctx context.Context, r interface{}) {
    c.Stop()

    if atomic.CompareAndSwapUint32(&c.errored, 0, 1) {
        if c.onError != nil {
            c.onError(ctx, r)
        }
    }
}
// Queues a message to be sent to channel.
func (c *MConnection) Send(chID ChannelID, msgBytes []byte) bool {
    if !c.IsRunning() {
        return false
    }

    c.logger.Debug("Send", "channel", chID, "conn", c, "msgBytes", msgBytes)

    // Send message to channel.
    channel, ok := c.channelsIdx[chID]
    if !ok {
        c.logger.Error(fmt.Sprintf("Cannot send bytes, unknown channel %X", chID))
        return false
    }

    success := channel.sendBytes(msgBytes)
    if success {
        // Wake up sendRoutine if necessary
        select {
        case c.send <- struct{}{}:
        default:
        }
    } else {
        c.logger.Debug("Send failed", "channel", chID, "conn", c, "msgBytes", msgBytes)
    }
    return success
}
// sendRoutine polls for packets to send from channels.
func (c *MConnection) sendRoutine(ctx context.Context) {
    defer c._recover(ctx)
    protoWriter := protoio.NewDelimitedWriter(c.bufConnWriter)

FOR_LOOP:
    for {
        var _n int
        var err error
    SELECTION:
        select {
        case <-c.flushTimer.Ch:
            // NOTE: flushTimer.Set() must be called every time
            // something is written to .bufConnWriter.
            c.flush()
        case <-c.chStatsTimer.C:
            for _, channel := range c.channels {
                channel.updateStats()
            }
        case <-c.pingTimer.C:
            _n, err = protoWriter.WriteMsg(mustWrapPacket(&tmp2p.PacketPing{}))
            if err != nil {
                c.logger.Error("Failed to send PacketPing", "err", err)
                break SELECTION
            }
            c.sendMonitor.Update(_n)
            c.logger.Debug("Starting pong timer", "dur", c.config.PongTimeout)
            c.pongTimer = time.AfterFunc(c.config.PongTimeout, func() {
                select {
                case c.pongTimeoutCh <- true:
                default:
                }
            })
            c.flush()
        case timeout := <-c.pongTimeoutCh:
            if timeout {
                err = errors.New("pong timeout")
            } else {
                c.stopPongTimer()
            }
        case <-c.pong:
            _n, err = protoWriter.WriteMsg(mustWrapPacket(&tmp2p.PacketPong{}))
            if err != nil {
                c.logger.Error("Failed to send PacketPong", "err", err)
                break SELECTION
            }
            c.sendMonitor.Update(_n)
            c.flush()
        case <-ctx.Done():
            break FOR_LOOP
        case <-c.quitSendRoutine:
            break FOR_LOOP
        case <-c.send:
            // Send some PacketMsgs
            eof := c.sendSomePacketMsgs(ctx)
            if !eof {
                // Keep sendRoutine awake.
                select {
                case c.send <- struct{}{}:
                default:
                }
            }
        }

        if !c.IsRunning() {
            break FOR_LOOP
        }
        if err != nil {
            c.logger.Error("Connection failed @ sendRoutine", "conn", c, "err", err)
            c.stopForError(ctx, err)
            break FOR_LOOP
        }
    }

    // Cleanup
    c.stopPongTimer()
    close(c.doneSendRoutine)
}
// Returns true if messages from channels were exhausted.
// Blocks in accordance with .sendMonitor throttling.
func (c *MConnection) sendSomePacketMsgs(ctx context.Context) bool {
    // Block until .sendMonitor says we can write.
    // Once we're ready we send more than we asked for,
    // but amortized it should even out.
    c.sendMonitor.Limit(c._maxPacketMsgSize, atomic.LoadInt64(&c.config.SendRate), true)

    // Now send some PacketMsgs.
    for i := 0; i < numBatchPacketMsgs; i++ {
        if c.sendPacketMsg(ctx) {
            return true
        }
    }
    return false
}

// Returns true if messages from channels were exhausted.
func (c *MConnection) sendPacketMsg(ctx context.Context) bool {
    // Choose a channel to create a PacketMsg from.
    // The chosen channel will be the one whose recentlySent/priority is the least.
    var leastRatio float32 = math.MaxFloat32
    var leastChannel *channel
    for _, channel := range c.channels {
        // If nothing to send, skip this channel
        if !channel.isSendPending() {
            continue
        }
        // Get ratio, and keep track of lowest ratio.
        ratio := float32(channel.recentlySent) / float32(channel.desc.Priority)
        if ratio < leastRatio {
            leastRatio = ratio
            leastChannel = channel
        }
    }

    // Nothing to send?
    if leastChannel == nil {
        return true
    }
    // c.logger.Info("Found a msgPacket to send")

    // Make & send a PacketMsg from this channel
    _n, err := leastChannel.writePacketMsgTo(c.bufConnWriter)
    if err != nil {
        c.logger.Error("Failed to write PacketMsg", "err", err)
        c.stopForError(ctx, err)
        return true
    }
    c.sendMonitor.Update(_n)
    c.flushTimer.Set()
    return false
}
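The channel-selection loop implements a simple form of weighted fair scheduling: among channels with data pending, pick the one with the lowest `recentlySent/priority` ratio, so high-priority channels get served more often but cannot starve the rest (since `recentlySent` decays over time). A self-contained sketch of just that selection -- `chanState` and `pickChannel` are illustrative stand-ins for the fields `sendPacketMsg` reads:

```go
package main

import (
	"fmt"
	"math"
)

// chanState is a hypothetical stand-in for the channel fields the
// selection loop reads.
type chanState struct {
	id           byte
	priority     int
	recentlySent int64
	pending      bool
}

// pickChannel mirrors the loop in sendPacketMsg: among channels with
// pending data, choose the lowest recentlySent/priority ratio.
func pickChannel(chans []*chanState) *chanState {
	var least *chanState
	var leastRatio float32 = math.MaxFloat32
	for _, ch := range chans {
		if !ch.pending {
			continue // nothing to send on this channel
		}
		ratio := float32(ch.recentlySent) / float32(ch.priority)
		if ratio < leastRatio {
			leastRatio = ratio
			least = ch
		}
	}
	return least // nil means all channels are exhausted
}

func main() {
	chans := []*chanState{
		{id: 0x10, priority: 5, recentlySent: 1000, pending: true},  // ratio 200
		{id: 0x20, priority: 10, recentlySent: 1000, pending: true}, // ratio 100
		{id: 0x30, priority: 1, recentlySent: 0, pending: false},    // skipped
	}
	fmt.Printf("next channel: %X\n", pickChannel(chans).id)
}
```

With equal `recentlySent`, the higher-priority channel (0x20) wins; as it accumulates sent bytes its ratio rises and lower-priority channels get their turn.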
// recvRoutine reads PacketMsgs and reconstructs the message using the channels' "recving" buffer.
// After a whole message has been assembled, it's pushed to onReceive().
// Blocks depending on how the connection is throttled.
// Otherwise, it never blocks.
func (c *MConnection) recvRoutine(ctx context.Context) {
    defer c._recover(ctx)

    protoReader := protoio.NewDelimitedReader(c.bufConnReader, c._maxPacketMsgSize)

FOR_LOOP:
    for {
        // Block until .recvMonitor says we can read.
        c.recvMonitor.Limit(c._maxPacketMsgSize, atomic.LoadInt64(&c.config.RecvRate), true)

        // Peek into bufConnReader for debugging
        /*
            if numBytes := c.bufConnReader.Buffered(); numBytes > 0 {
                bz, err := c.bufConnReader.Peek(tmmath.MinInt(numBytes, 100))
                if err == nil {
                    // return
                } else {
                    c.logger.Debug("error peeking connection buffer", "err", err)
                    // return nil
                }
                c.logger.Info("Peek connection buffer", "numBytes", numBytes, "bz", bz)
            }
        */

        // Read packet type
        var packet tmp2p.Packet
        _n, err := protoReader.ReadMsg(&packet)
        c.recvMonitor.Update(_n)
        if err != nil {
            // stopServices was invoked and we are shutting down;
            // receiving is expected to fail since we will close the connection
            select {
            case <-ctx.Done():
            case <-c.quitRecvRoutine:
                break FOR_LOOP
            default:
            }

            if c.IsRunning() {
                if err == io.EOF {
                    c.logger.Info("Connection is closed @ recvRoutine (likely by the other side)", "conn", c)
                } else {
                    c.logger.Debug("Connection failed @ recvRoutine (reading byte)", "conn", c, "err", err)
                }
                c.stopForError(ctx, err)
            }
            break FOR_LOOP
        }

        // Read more depending on packet type.
        switch pkt := packet.Sum.(type) {
        case *tmp2p.Packet_PacketPing:
            // TODO: prevent abuse, as they cause flush()'s.
            // https://github.com/tendermint/tendermint/issues/1190
            select {
            case c.pong <- struct{}{}:
            default:
                // never block
            }
        case *tmp2p.Packet_PacketPong:
            select {
            case c.pongTimeoutCh <- false:
            default:
                // never block
            }
        case *tmp2p.Packet_PacketMsg:
            channelID := ChannelID(pkt.PacketMsg.ChannelID)
            channel, ok := c.channelsIdx[channelID]
            if pkt.PacketMsg.ChannelID < 0 || pkt.PacketMsg.ChannelID > math.MaxUint8 || !ok || channel == nil {
                err := fmt.Errorf("unknown channel %X", pkt.PacketMsg.ChannelID)
                c.logger.Debug("Connection failed @ recvRoutine", "conn", c, "err", err)
                c.stopForError(ctx, err)
                break FOR_LOOP
            }

            msgBytes, err := channel.recvPacketMsg(*pkt.PacketMsg)
            if err != nil {
                if c.IsRunning() {
                    c.logger.Debug("Connection failed @ recvRoutine", "conn", c, "err", err)
                    c.stopForError(ctx, err)
                }
                break FOR_LOOP
            }
            if msgBytes != nil {
                c.logger.Debug("Received bytes", "chID", channelID, "msgBytes", msgBytes)
                // NOTE: This means the reactor.Receive runs in the same thread as the p2p recv routine
                c.onReceive(ctx, channelID, msgBytes)
            }
        default:
            err := fmt.Errorf("unknown message type %v", reflect.TypeOf(packet))
            c.logger.Error("Connection failed @ recvRoutine", "conn", c, "err", err)
            c.stopForError(ctx, err)
            break FOR_LOOP
        }
    }

    // Cleanup
    close(c.pong)
    for range c.pong {
        // Drain
    }
}
// not goroutine-safe
func (c *MConnection) stopPongTimer() {
    if c.pongTimer != nil {
        _ = c.pongTimer.Stop()
        c.pongTimer = nil
    }
}

// maxPacketMsgSize returns the maximum size of a PacketMsg
func (c *MConnection) maxPacketMsgSize() int {
    bz, err := proto.Marshal(mustWrapPacket(&tmp2p.PacketMsg{
        ChannelID: 0x01,
        EOF:       true,
        Data:      make([]byte, c.config.MaxPacketMsgPayloadSize),
    }))
    if err != nil {
        panic(err)
    }
    return len(bz)
}
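`maxPacketMsgSize` marshals a maximally-sized packet rather than guessing, because the on-wire size exceeds the payload size: protobuf field tags add overhead, and the delimited reader/writer used above additionally frames each message with a uvarint length prefix. A stdlib sketch of that kind of length-delimited framing, with `frameSize` as an illustrative helper (not the actual protoio computation):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// frameSize returns the size of a length-delimited frame: a uvarint
// length prefix followed by the payload, the general shape of framing
// that delimited protobuf readers/writers use.
func frameSize(payloadLen int) int {
	var buf [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(buf[:], uint64(payloadLen)) // prefix length in bytes
	return n + payloadLen
}

func main() {
	fmt.Println(frameSize(100))  // 101: lengths < 128 take a 1-byte prefix
	fmt.Println(frameSize(1400)) // 1402: lengths < 16384 take a 2-byte prefix
}
```

Computing the bound once by marshaling a worst-case packet (and caching it in `_maxPacketMsgSize`) avoids hardcoding these overheads.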
type ChannelStatus struct {
    ID                byte
    SendQueueCapacity int
    SendQueueSize     int
    Priority          int
    RecentlySent      int64
}

//-----------------------------------------------------------------------------

// ChannelID is an arbitrary channel ID.
type ChannelID uint16

type ChannelDescriptor struct {
    ID          ChannelID
    Priority    int
    MessageType proto.Message

    // TODO: Remove once p2p refactor is complete.
    SendQueueCapacity   int
    RecvMessageCapacity int

    // RecvBufferCapacity defines the max buffer size of inbound messages for a
    // given p2p Channel queue.
    RecvBufferCapacity int
}

func (chDesc ChannelDescriptor) FillDefaults() (filled ChannelDescriptor) {
    if chDesc.SendQueueCapacity == 0 {
        chDesc.SendQueueCapacity = defaultSendQueueCapacity
    }
    if chDesc.RecvBufferCapacity == 0 {
        chDesc.RecvBufferCapacity = defaultRecvBufferCapacity
    }
    if chDesc.RecvMessageCapacity == 0 {
        chDesc.RecvMessageCapacity = defaultRecvMessageCapacity
    }
    filled = chDesc
    return
}
// NOTE: not goroutine-safe.
type channel struct {
    // Exponential moving average.
    // This field must be accessed atomically.
    // It is first in the struct to ensure correct alignment.
    // See https://github.com/tendermint/tendermint/issues/7000.
    recentlySent int64

    conn *MConnection
    desc ChannelDescriptor

    sendQueue     chan []byte
    sendQueueSize int32 // atomic.
    recving       []byte
    sending       []byte

    maxPacketMsgPayloadSize int

    logger log.Logger
}

func newChannel(conn *MConnection, desc ChannelDescriptor) *channel {
    desc = desc.FillDefaults()
    if desc.Priority <= 0 {
        panic("Channel default priority must be a positive integer")
    }
    return &channel{
        conn:                    conn,
        desc:                    desc,
        sendQueue:               make(chan []byte, desc.SendQueueCapacity),
        recving:                 make([]byte, 0, desc.RecvBufferCapacity),
        maxPacketMsgPayloadSize: conn.config.MaxPacketMsgPayloadSize,
        logger:                  conn.logger,
    }
}
// Queues message to send to this channel.
// Goroutine-safe
// Times out (and returns false) after defaultSendTimeout
func (ch *channel) sendBytes(bytes []byte) bool {
    select {
    case ch.sendQueue <- bytes:
        atomic.AddInt32(&ch.sendQueueSize, 1)
        return true
    case <-time.After(defaultSendTimeout):
        return false
    }
}
// Returns true if any PacketMsgs are pending to be sent.
// Call before calling nextPacketMsg()
// Goroutine-safe
func (ch *channel) isSendPending() bool {
    if len(ch.sending) == 0 {
        if len(ch.sendQueue) == 0 {
            return false
        }
        ch.sending = <-ch.sendQueue
    }
    return true
}

// Creates a new PacketMsg to send.
// Not goroutine-safe
func (ch *channel) nextPacketMsg() tmp2p.PacketMsg {
    packet := tmp2p.PacketMsg{ChannelID: int32(ch.desc.ID)}
    maxSize := ch.maxPacketMsgPayloadSize
    packet.Data = ch.sending[:tmmath.MinInt(maxSize, len(ch.sending))]
    if len(ch.sending) <= maxSize {
        packet.EOF = true
        ch.sending = nil
        atomic.AddInt32(&ch.sendQueueSize, -1) // decrement sendQueueSize
    } else {
        packet.EOF = false
        ch.sending = ch.sending[tmmath.MinInt(maxSize, len(ch.sending)):]
    }
    return packet
}
// Writes next PacketMsg to w and updates c.recentlySent.
// Not goroutine-safe
func (ch *channel) writePacketMsgTo(w io.Writer) (n int, err error) {
    packet := ch.nextPacketMsg()
    n, err = protoio.NewDelimitedWriter(w).WriteMsg(mustWrapPacket(&packet))
    atomic.AddInt64(&ch.recentlySent, int64(n))
    return
}

// Handles incoming PacketMsgs. It returns the message bytes if the message is
// complete, which are owned by the caller and will not be modified.
// Not goroutine-safe
func (ch *channel) recvPacketMsg(packet tmp2p.PacketMsg) ([]byte, error) {
    ch.logger.Debug("Read PacketMsg", "conn", ch.conn, "packet", packet)
    var recvCap, recvReceived = ch.desc.RecvMessageCapacity, len(ch.recving) + len(packet.Data)
    if recvCap < recvReceived {
        return nil, fmt.Errorf("received message exceeds available capacity: %v < %v", recvCap, recvReceived)
    }
    ch.recving = append(ch.recving, packet.Data...)
    if packet.EOF {
        msgBytes := ch.recving
        ch.recving = make([]byte, 0, ch.desc.RecvBufferCapacity)
        return msgBytes, nil
    }
    return nil, nil
}
// Call this periodically to update stats for throttling purposes.
// Not goroutine-safe
func (ch *channel) updateStats() {
    // Exponential decay of stats.
    // TODO: optimize.
    atomic.StoreInt64(&ch.recentlySent, int64(float64(atomic.LoadInt64(&ch.recentlySent))*0.8))
}
//----------------------------------------
// Packet

// mustWrapPacket takes a packet kind (oneof) and wraps it in a tmp2p.Packet message.
func mustWrapPacket(pb proto.Message) *tmp2p.Packet {
    var msg tmp2p.Packet

    switch pb := pb.(type) {
    case *tmp2p.Packet: // already a packet
        msg = *pb
    case *tmp2p.PacketPing:
        msg = tmp2p.Packet{
            Sum: &tmp2p.Packet_PacketPing{
                PacketPing: pb,
            },
        }
    case *tmp2p.PacketPong:
        msg = tmp2p.Packet{
            Sum: &tmp2p.Packet_PacketPong{
                PacketPong: pb,
            },
        }
    case *tmp2p.PacketMsg:
        msg = tmp2p.Packet{
            Sum: &tmp2p.Packet_PacketMsg{
                PacketMsg: pb,
            },
        }
    default:
        panic(fmt.Errorf("unknown packet type %T", pb))
    }

    return &msg
}