p2p: implement new Transport interface (#5791)

This implements a new `Transport` interface and related types for the P2P refactor in #5670. Previously, `conn.MConnection` was very tightly coupled to the `Peer` implementation. In order to allow alternative non-multiplexed transports (e.g. QUIC), MConnection has now been moved below the `Transport` interface, as `MConnTransport`, and decoupled from the peer. Since the `p2p` package is not covered by our Go API stability guarantees, this is not considered a breaking change and is not listed in the changelog.

The initial approach was to implement the new interface in its final form (which also involved possible protocol changes, see https://github.com/tendermint/spec/pull/227). However, it turned out that this would require a large number of changes to existing P2P code because of the previous tight coupling between `Peer` and `MConnection` and the reliance on subtleties in the MConnection behavior. Instead, I have broadened the `Transport` interface to expose much of the existing MConnection interface, preserved much of the existing MConnection logic and behavior in the transport implementation, and tried to make as few changes to the rest of the P2P stack as possible. We will instead reduce this interface gradually as we refactor other parts of the P2P stack.

The low-level transport code and protocol (e.g. MConnection, SecretConnection and so on) have not been significantly changed, and refactoring them is not a priority until we come up with a plan for QUIC adoption, as we may end up discarding the MConnection code entirely. There are no tests of the new `MConnTransport`, as this code is likely to evolve as we proceed with the P2P refactor, but tests should be added before a final release. The E2E tests are sufficient for basic validation in the meantime.
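The listing below is the existing `conn.MConnection` implementation that `MConnTransport` now wraps. As a rough, hypothetical sketch of the decoupling described above (the interface and method names here are illustrative only and are not the `Transport` interface actually merged in #5791), the idea is to hide connection setup and the multiplexed channels behind small interfaces:

    // Hypothetical sketch only; see #5791 for the Transport interface that was actually merged.
    package p2psketch

    import "context"

    // Transport establishes peer connections without exposing how they are
    // made (TCP + MConnection today, possibly QUIC later).
    type Transport interface {
        // Accept waits for and returns the next inbound connection.
        Accept(ctx context.Context) (Connection, error)
        // Dial opens an outbound connection to the given address.
        Dial(ctx context.Context, address string) (Connection, error)
        // Close stops accepting new connections.
        Close() error
    }

    // Connection is a single multiplexed peer connection, mirroring what
    // MConnection's Send/TrySend and the onReceive callback provide today.
    type Connection interface {
        SendMessage(chID byte, msg []byte) error
        ReceiveMessage() (chID byte, msg []byte, err error)
        Close() error
    }

In #5791 the MConnection code below is what sits behind that boundary, as `MConnTransport`.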
package conn

import (
    "bufio"
    "errors"
    "fmt"
    "io"
    "math"
    "net"
    "reflect"
    "runtime/debug"
    "sync/atomic"
    "time"

    "github.com/gogo/protobuf/proto"

    flow "github.com/tendermint/tendermint/libs/flowrate"
    "github.com/tendermint/tendermint/libs/log"
    tmmath "github.com/tendermint/tendermint/libs/math"
    "github.com/tendermint/tendermint/libs/protoio"
    "github.com/tendermint/tendermint/libs/service"
    tmsync "github.com/tendermint/tendermint/libs/sync"
    "github.com/tendermint/tendermint/libs/timer"
    tmp2p "github.com/tendermint/tendermint/proto/tendermint/p2p"
)

const (
    // mirrors MaxPacketMsgPayloadSize from config/config.go
    defaultMaxPacketMsgPayloadSize = 1400

    numBatchPacketMsgs = 10
    minReadBufferSize  = 1024
    minWriteBufferSize = 65536
    updateStats        = 2 * time.Second

    // some of these defaults are written in the user config
    // flushThrottle, sendRate, recvRate
    // TODO: remove values present in config
    defaultFlushThrottle = 100 * time.Millisecond

    defaultSendQueueCapacity   = 1
    defaultRecvBufferCapacity  = 4096
    defaultRecvMessageCapacity = 22020096      // 21MB
    defaultSendRate            = int64(512000) // 500KB/s
    defaultRecvRate            = int64(512000) // 500KB/s
    defaultSendTimeout         = 10 * time.Second
    defaultPingInterval        = 60 * time.Second
    defaultPongTimeout         = 45 * time.Second
)

type receiveCbFunc func(chID byte, msgBytes []byte)
type errorCbFunc func(interface{})
/*
Each peer has one `MConnection` (multiplex connection) instance.

__multiplex__ *noun* a system or signal involving simultaneous transmission of
several messages along a single channel of communication.

Each `MConnection` handles message transmission on multiple abstract communication
`Channel`s. Each channel has a globally unique byte id.
The byte id and the relative priorities of each `Channel` are configured upon
initialization of the connection.

There are two methods for sending messages:

    func (m MConnection) Send(chID byte, msgBytes []byte) bool {}
    func (m MConnection) TrySend(chID byte, msgBytes []byte) bool {}

`Send(chID, msgBytes)` is a blocking call that waits until `msg` is
successfully queued for the channel with the given id byte `chID`, or until the
request times out. The message `msg` is serialized using Protobuf.

`TrySend(chID, msgBytes)` is a nonblocking call that returns false if the
channel's queue is full.

Inbound message bytes are handled with an onReceive callback function.
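
A minimal usage sketch (the channel descriptors, channel ID 0x20, and the
callbacks below are illustrative and not defined in this file):

    mconn := NewMConnection(netConn, chDescs, onReceive, onError)
    if err := mconn.Start(); err != nil { // Start is provided by service.BaseService
        return err
    }
    ok := mconn.Send(0x20, msgBytes) // blocks until queued, or times out after defaultSendTimeout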
*/
type MConnection struct {
    service.BaseService

    conn          net.Conn
    bufConnReader *bufio.Reader
    bufConnWriter *bufio.Writer
    sendMonitor   *flow.Monitor
    recvMonitor   *flow.Monitor
    send          chan struct{}
    pong          chan struct{}
    channels      []*Channel
    channelsIdx   map[byte]*Channel
    onReceive     receiveCbFunc
    onError       errorCbFunc
    errored       uint32
    config        MConnConfig

    // Closing quitSendRoutine will cause the sendRoutine to eventually quit.
    // doneSendRoutine is closed when the sendRoutine actually quits.
    quitSendRoutine chan struct{}
    doneSendRoutine chan struct{}

    // Closing quitRecvRoutine will cause the recvRoutine to eventually quit.
    quitRecvRoutine chan struct{}

    // used to ensure FlushStop and OnStop
    // are safe to call concurrently.
    stopMtx tmsync.Mutex

    flushTimer *timer.ThrottleTimer // flush writes as necessary but throttled.
    pingTimer  *time.Ticker         // send pings periodically

    // close conn if pong is not received in pongTimeout
    pongTimer     *time.Timer
    pongTimeoutCh chan bool // true - timeout, false - peer sent pong

    chStatsTimer *time.Ticker // update channel stats periodically

    created time.Time // time of creation

    _maxPacketMsgSize int
}

// MConnConfig is an MConnection configuration.
type MConnConfig struct {
    SendRate int64 `mapstructure:"send_rate"`
    RecvRate int64 `mapstructure:"recv_rate"`

    // Maximum payload size
    MaxPacketMsgPayloadSize int `mapstructure:"max_packet_msg_payload_size"`

    // Interval to flush writes (throttled)
    FlushThrottle time.Duration `mapstructure:"flush_throttle"`

    // Interval to send pings
    PingInterval time.Duration `mapstructure:"ping_interval"`

    // Maximum wait time for pongs
    PongTimeout time.Duration `mapstructure:"pong_timeout"`
}

// DefaultMConnConfig returns the default config.
func DefaultMConnConfig() MConnConfig {
    return MConnConfig{
        SendRate:                defaultSendRate,
        RecvRate:                defaultRecvRate,
        MaxPacketMsgPayloadSize: defaultMaxPacketMsgPayloadSize,
        FlushThrottle:           defaultFlushThrottle,
        PingInterval:            defaultPingInterval,
        PongTimeout:             defaultPongTimeout,
    }
}
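
// A caller can start from the defaults and override individual fields before
// passing the config to NewMConnectionWithConfig. The values below are
// illustrative only:
//
//    cfg := DefaultMConnConfig()
//    cfg.PingInterval = 30 * time.Second
//    cfg.PongTimeout = 20 * time.Second // must stay below PingInterval (enforced in NewMConnectionWithConfig)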
// NewMConnection wraps net.Conn and creates multiplex connection
func NewMConnection(
    conn net.Conn,
    chDescs []*ChannelDescriptor,
    onReceive receiveCbFunc,
    onError errorCbFunc,
) *MConnection {
    return NewMConnectionWithConfig(
        conn,
        chDescs,
        onReceive,
        onError,
        DefaultMConnConfig())
}

// NewMConnectionWithConfig wraps net.Conn and creates multiplex connection with a config
func NewMConnectionWithConfig(
    conn net.Conn,
    chDescs []*ChannelDescriptor,
    onReceive receiveCbFunc,
    onError errorCbFunc,
    config MConnConfig,
) *MConnection {
    if config.PongTimeout >= config.PingInterval {
        panic("pongTimeout must be less than pingInterval (otherwise, next ping will reset pong timer)")
    }

    mconn := &MConnection{
        conn:          conn,
        bufConnReader: bufio.NewReaderSize(conn, minReadBufferSize),
        bufConnWriter: bufio.NewWriterSize(conn, minWriteBufferSize),
        sendMonitor:   flow.New(0, 0),
        recvMonitor:   flow.New(0, 0),
        send:          make(chan struct{}, 1),
        pong:          make(chan struct{}, 1),
        onReceive:     onReceive,
        onError:       onError,
        config:        config,
        created:       time.Now(),
    }

    // Create channels
    var channelsIdx = map[byte]*Channel{}
    var channels = []*Channel{}

    for _, desc := range chDescs {
        channel := newChannel(mconn, *desc)
        channelsIdx[channel.desc.ID] = channel
        channels = append(channels, channel)
    }
    mconn.channels = channels
    mconn.channelsIdx = channelsIdx

    mconn.BaseService = *service.NewBaseService(nil, "MConnection", mconn)

    // maxPacketMsgSize() is a bit heavy, so call just once
    mconn._maxPacketMsgSize = mconn.maxPacketMsgSize()

    return mconn
}

func (c *MConnection) SetLogger(l log.Logger) {
    c.BaseService.SetLogger(l)
    for _, ch := range c.channels {
        ch.SetLogger(l)
    }
}

// OnStart implements BaseService
func (c *MConnection) OnStart() error {
    if err := c.BaseService.OnStart(); err != nil {
        return err
    }
    c.flushTimer = timer.NewThrottleTimer("flush", c.config.FlushThrottle)
    c.pingTimer = time.NewTicker(c.config.PingInterval)
    c.pongTimeoutCh = make(chan bool, 1)
    c.chStatsTimer = time.NewTicker(updateStats)
    c.quitSendRoutine = make(chan struct{})
    c.doneSendRoutine = make(chan struct{})
    c.quitRecvRoutine = make(chan struct{})
    go c.sendRoutine()
    go c.recvRoutine()
    return nil
}

// stopServices stops the BaseService and timers and closes the quitSendRoutine.
// If the quitSendRoutine was already closed, it returns true, otherwise it returns false.
// It uses the stopMtx to ensure only one of FlushStop and OnStop can do this at a time.
func (c *MConnection) stopServices() (alreadyStopped bool) {
    c.stopMtx.Lock()
    defer c.stopMtx.Unlock()

    select {
    case <-c.quitSendRoutine:
        // already quit
        return true
    default:
    }

    select {
    case <-c.quitRecvRoutine:
        // already quit
        return true
    default:
    }

    c.BaseService.OnStop()
    c.flushTimer.Stop()
    c.pingTimer.Stop()
    c.chStatsTimer.Stop()

    // inform the recvRoutine that we are shutting down
    close(c.quitRecvRoutine)
    close(c.quitSendRoutine)
    return false
}

// FlushStop replicates the logic of OnStop.
// It additionally ensures that all successful
// .Send() calls will get flushed before closing
// the connection.
func (c *MConnection) FlushStop() {
    if c.stopServices() {
        return
    }

    // this block is unique to FlushStop
    {
        // wait until the sendRoutine exits
        // so we don't race on calling sendSomePacketMsgs
        <-c.doneSendRoutine

        // Send and flush all pending msgs.
        // Since sendRoutine has exited, we can call this
        // safely
        eof := c.sendSomePacketMsgs()
        for !eof {
            eof = c.sendSomePacketMsgs()
        }
        c.flush()

        // Now we can close the connection
    }

    c.conn.Close()

    // We can't close pong safely here because
    // recvRoutine may write to it after we've stopped.
    // Though it doesn't need to get closed at all,
    // we close it @ recvRoutine.

    // c.Stop()
}

// OnStop implements BaseService
func (c *MConnection) OnStop() {
    if c.stopServices() {
        return
    }

    c.conn.Close()

    // We can't close pong safely here because
    // recvRoutine may write to it after we've stopped.
    // Though it doesn't need to get closed at all,
    // we close it @ recvRoutine.
}

func (c *MConnection) String() string {
    return fmt.Sprintf("MConn{%v}", c.conn.RemoteAddr())
}

func (c *MConnection) flush() {
    c.Logger.Debug("Flush", "conn", c)
    err := c.bufConnWriter.Flush()
    if err != nil {
        c.Logger.Debug("MConnection flush failed", "err", err)
    }
}

// Catch panics, usually caused by remote disconnects.
func (c *MConnection) _recover() {
    if r := recover(); r != nil {
        c.Logger.Error("MConnection panicked", "err", r, "stack", string(debug.Stack()))
        c.stopForError(fmt.Errorf("recovered from panic: %v", r))
    }
}

func (c *MConnection) stopForError(r interface{}) {
    if err := c.Stop(); err != nil {
        c.Logger.Error("Error stopping connection", "err", err)
    }
    if atomic.CompareAndSwapUint32(&c.errored, 0, 1) {
        if c.onError != nil {
            c.onError(r)
        }
    }
}

// Queues a message to be sent to channel.
func (c *MConnection) Send(chID byte, msgBytes []byte) bool {
    if !c.IsRunning() {
        return false
    }

    c.Logger.Debug("Send", "channel", chID, "conn", c, "msgBytes", msgBytes)

    // Send message to channel.
    channel, ok := c.channelsIdx[chID]
    if !ok {
        c.Logger.Error(fmt.Sprintf("Cannot send bytes, unknown channel %X", chID))
        return false
    }

    success := channel.sendBytes(msgBytes)
    if success {
        // Wake up sendRoutine if necessary
        select {
        case c.send <- struct{}{}:
        default:
        }
    } else {
        c.Logger.Debug("Send failed", "channel", chID, "conn", c, "msgBytes", msgBytes)
    }
    return success
}

// Queues a message to be sent to channel.
// Nonblocking, returns true if successful.
func (c *MConnection) TrySend(chID byte, msgBytes []byte) bool {
    if !c.IsRunning() {
        return false
    }

    c.Logger.Debug("TrySend", "channel", chID, "conn", c, "msgBytes", msgBytes)

    // Send message to channel.
    channel, ok := c.channelsIdx[chID]
    if !ok {
        c.Logger.Error(fmt.Sprintf("Cannot send bytes, unknown channel %X", chID))
        return false
    }

    ok = channel.trySendBytes(msgBytes)
    if ok {
        // Wake up sendRoutine if necessary
        select {
        case c.send <- struct{}{}:
        default:
        }
    }

    return ok
}

// CanSend returns true if you can send more data onto the chID, false
// otherwise. Use only as a heuristic.
func (c *MConnection) CanSend(chID byte) bool {
    if !c.IsRunning() {
        return false
    }

    channel, ok := c.channelsIdx[chID]
    if !ok {
        c.Logger.Error(fmt.Sprintf("Unknown channel %X", chID))
        return false
    }
    return channel.canSend()
}

// sendRoutine polls for packets to send from channels.
func (c *MConnection) sendRoutine() {
    defer c._recover()

    protoWriter := protoio.NewDelimitedWriter(c.bufConnWriter)

FOR_LOOP:
    for {
        var _n int
        var err error
    SELECTION:
        select {
        case <-c.flushTimer.Ch:
            // NOTE: flushTimer.Set() must be called every time
            // something is written to .bufConnWriter.
            c.flush()
        case <-c.chStatsTimer.C:
            for _, channel := range c.channels {
                channel.updateStats()
            }
        case <-c.pingTimer.C:
            c.Logger.Debug("Send Ping")
            _n, err = protoWriter.WriteMsg(mustWrapPacket(&tmp2p.PacketPing{}))
            if err != nil {
                c.Logger.Error("Failed to send PacketPing", "err", err)
                break SELECTION
            }
            c.sendMonitor.Update(_n)
            c.Logger.Debug("Starting pong timer", "dur", c.config.PongTimeout)
            c.pongTimer = time.AfterFunc(c.config.PongTimeout, func() {
                select {
                case c.pongTimeoutCh <- true:
                default:
                }
            })
            c.flush()
        case timeout := <-c.pongTimeoutCh:
            if timeout {
                c.Logger.Debug("Pong timeout")
                err = errors.New("pong timeout")
            } else {
                c.stopPongTimer()
            }
        case <-c.pong:
            c.Logger.Debug("Send Pong")
            _n, err = protoWriter.WriteMsg(mustWrapPacket(&tmp2p.PacketPong{}))
            if err != nil {
                c.Logger.Error("Failed to send PacketPong", "err", err)
                break SELECTION
            }
            c.sendMonitor.Update(_n)
            c.flush()
        case <-c.quitSendRoutine:
            break FOR_LOOP
        case <-c.send:
            // Send some PacketMsgs
            eof := c.sendSomePacketMsgs()
            if !eof {
                // Keep sendRoutine awake.
                select {
                case c.send <- struct{}{}:
                default:
                }
            }
        }

        if !c.IsRunning() {
            break FOR_LOOP
        }
        if err != nil {
            c.Logger.Error("Connection failed @ sendRoutine", "conn", c, "err", err)
            c.stopForError(err)
            break FOR_LOOP
        }
    }

    // Cleanup
    c.stopPongTimer()
    close(c.doneSendRoutine)
}

// Returns true if messages from channels were exhausted.
// Blocks in accordance with .sendMonitor throttling.
func (c *MConnection) sendSomePacketMsgs() bool {
    // Block until .sendMonitor says we can write.
    // Once we're ready we send more than we asked for,
    // but amortized it should even out.
    c.sendMonitor.Limit(c._maxPacketMsgSize, atomic.LoadInt64(&c.config.SendRate), true)

    // Now send some PacketMsgs.
    for i := 0; i < numBatchPacketMsgs; i++ {
        if c.sendPacketMsg() {
            return true
        }
    }
    return false
}

// Returns true if messages from channels were exhausted.
func (c *MConnection) sendPacketMsg() bool {
    // Choose a channel to create a PacketMsg from.
    // The chosen channel will be the one whose recentlySent/priority is the least.
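    // For example (illustrative numbers): if channel A has priority 10 and has
    // recently sent 1000 bytes (ratio 100), while channel B has priority 1 and
    // has also recently sent 1000 bytes (ratio 1000), channel A is picked, so
    // higher-priority channels drain first until their recentlySent catches up.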
    var leastRatio float32 = math.MaxFloat32
    var leastChannel *Channel
    for _, channel := range c.channels {
        // If nothing to send, skip this channel
        if !channel.isSendPending() {
            continue
        }
        // Get ratio, and keep track of lowest ratio.
        ratio := float32(channel.recentlySent) / float32(channel.desc.Priority)
        if ratio < leastRatio {
            leastRatio = ratio
            leastChannel = channel
        }
    }

    // Nothing to send?
    if leastChannel == nil {
        return true
    }
    // c.Logger.Info("Found a msgPacket to send")

    // Make & send a PacketMsg from this channel
    _n, err := leastChannel.writePacketMsgTo(c.bufConnWriter)
    if err != nil {
        c.Logger.Error("Failed to write PacketMsg", "err", err)
        c.stopForError(err)
        return true
    }
    c.sendMonitor.Update(_n)
    c.flushTimer.Set()
    return false
}

// recvRoutine reads PacketMsgs and reconstructs the message using the channels' "recving" buffer.
// After a whole message has been assembled, it's pushed to onReceive().
// Blocks depending on how the connection is throttled.
// Otherwise, it never blocks.
func (c *MConnection) recvRoutine() {
    defer c._recover()

    protoReader := protoio.NewDelimitedReader(c.bufConnReader, c._maxPacketMsgSize)

FOR_LOOP:
    for {
        // Block until .recvMonitor says we can read.
        c.recvMonitor.Limit(c._maxPacketMsgSize, atomic.LoadInt64(&c.config.RecvRate), true)

        // Peek into bufConnReader for debugging
        /*
            if numBytes := c.bufConnReader.Buffered(); numBytes > 0 {
                bz, err := c.bufConnReader.Peek(tmmath.MinInt(numBytes, 100))
                if err == nil {
                    // return
                } else {
                    c.Logger.Debug("Error peeking connection buffer", "err", err)
                    // return nil
                }
                c.Logger.Info("Peek connection buffer", "numBytes", numBytes, "bz", bz)
            }
        */

        // Read packet type
        var packet tmp2p.Packet

        err := protoReader.ReadMsg(&packet)
        if err != nil {
            // stopServices was invoked and we are shutting down
            // receiving is expected to fail since we will close the connection
            select {
            case <-c.quitRecvRoutine:
                break FOR_LOOP
            default:
            }

            if c.IsRunning() {
                if err == io.EOF {
                    c.Logger.Info("Connection is closed @ recvRoutine (likely by the other side)", "conn", c)
                } else {
                    c.Logger.Debug("Connection failed @ recvRoutine (reading byte)", "conn", c, "err", err)
                }
                c.stopForError(err)
            }
            break FOR_LOOP
        }

        // Read more depending on packet type.
        switch pkt := packet.Sum.(type) {
        case *tmp2p.Packet_PacketPing:
            // TODO: prevent abuse, as they cause flush()'s.
            // https://github.com/tendermint/tendermint/issues/1190
            c.Logger.Debug("Receive Ping")
            select {
            case c.pong <- struct{}{}:
            default:
                // never block
            }
        case *tmp2p.Packet_PacketPong:
            c.Logger.Debug("Receive Pong")
            select {
            case c.pongTimeoutCh <- false:
            default:
                // never block
            }
        case *tmp2p.Packet_PacketMsg:
            channel, ok := c.channelsIdx[byte(pkt.PacketMsg.ChannelID)]
            if !ok || channel == nil {
                err := fmt.Errorf("unknown channel %X", pkt.PacketMsg.ChannelID)
                c.Logger.Debug("Connection failed @ recvRoutine", "conn", c, "err", err)
                c.stopForError(err)
                break FOR_LOOP
            }

            msgBytes, err := channel.recvPacketMsg(*pkt.PacketMsg)
            if err != nil {
                if c.IsRunning() {
                    c.Logger.Debug("Connection failed @ recvRoutine", "conn", c, "err", err)
                    c.stopForError(err)
                }
                break FOR_LOOP
            }
            if msgBytes != nil {
                c.Logger.Debug("Received bytes", "chID", pkt.PacketMsg.ChannelID, "msgBytes", msgBytes)
                // NOTE: This means the reactor.Receive runs in the same thread as the p2p recv routine
                c.onReceive(byte(pkt.PacketMsg.ChannelID), msgBytes)
            }
        default:
            err := fmt.Errorf("unknown message type %v", reflect.TypeOf(packet))
            c.Logger.Error("Connection failed @ recvRoutine", "conn", c, "err", err)
            c.stopForError(err)
            break FOR_LOOP
        }
    }

    // Cleanup
    close(c.pong)
    for range c.pong {
        // Drain
    }
}

// not goroutine-safe
func (c *MConnection) stopPongTimer() {
    if c.pongTimer != nil {
        _ = c.pongTimer.Stop()
        c.pongTimer = nil
    }
}

// maxPacketMsgSize returns the maximum size of a PacketMsg
func (c *MConnection) maxPacketMsgSize() int {
    bz, err := proto.Marshal(mustWrapPacket(&tmp2p.PacketMsg{
        ChannelID: 0x01,
        EOF:       true,
        Data:      make([]byte, c.config.MaxPacketMsgPayloadSize),
    }))
    if err != nil {
        panic(err)
    }
    return len(bz)
}

type ConnectionStatus struct {
    Duration    time.Duration
    SendMonitor flow.Status
    RecvMonitor flow.Status
    Channels    []ChannelStatus
}

type ChannelStatus struct {
    ID                byte
    SendQueueCapacity int
    SendQueueSize     int
    Priority          int
    RecentlySent      int64
}

func (c *MConnection) Status() ConnectionStatus {
    var status ConnectionStatus
    status.Duration = time.Since(c.created)
    status.SendMonitor = c.sendMonitor.Status()
    status.RecvMonitor = c.recvMonitor.Status()
    status.Channels = make([]ChannelStatus, len(c.channels))
    for i, channel := range c.channels {
        status.Channels[i] = ChannelStatus{
            ID:                channel.desc.ID,
            SendQueueCapacity: cap(channel.sendQueue),
            SendQueueSize:     int(atomic.LoadInt32(&channel.sendQueueSize)),
            Priority:          channel.desc.Priority,
            RecentlySent:      atomic.LoadInt64(&channel.recentlySent),
        }
    }
    return status
}

//-----------------------------------------------------------------------------

type ChannelDescriptor struct {
    ID                  byte
    Priority            int
    SendQueueCapacity   int
    RecvBufferCapacity  int
    RecvMessageCapacity int
}

func (chDesc ChannelDescriptor) FillDefaults() (filled ChannelDescriptor) {
    if chDesc.SendQueueCapacity == 0 {
        chDesc.SendQueueCapacity = defaultSendQueueCapacity
    }
    if chDesc.RecvBufferCapacity == 0 {
        chDesc.RecvBufferCapacity = defaultRecvBufferCapacity
    }
    if chDesc.RecvMessageCapacity == 0 {
        chDesc.RecvMessageCapacity = defaultRecvMessageCapacity
    }
    filled = chDesc
    return
}

// TODO: lowercase.
// NOTE: not goroutine-safe.
type Channel struct {
    conn          *MConnection
    desc          ChannelDescriptor
    sendQueue     chan []byte
    sendQueueSize int32 // atomic.
    recving       []byte
    sending       []byte
    recentlySent  int64 // exponential moving average

    maxPacketMsgPayloadSize int

    Logger log.Logger
}

func newChannel(conn *MConnection, desc ChannelDescriptor) *Channel {
    desc = desc.FillDefaults()
    if desc.Priority <= 0 {
        panic("Channel default priority must be a positive integer")
    }
    return &Channel{
        conn:                    conn,
        desc:                    desc,
        sendQueue:               make(chan []byte, desc.SendQueueCapacity),
        recving:                 make([]byte, 0, desc.RecvBufferCapacity),
        maxPacketMsgPayloadSize: conn.config.MaxPacketMsgPayloadSize,
    }
}

func (ch *Channel) SetLogger(l log.Logger) {
    ch.Logger = l
}

// Queues message to send to this channel.
// Goroutine-safe
// Times out (and returns false) after defaultSendTimeout
func (ch *Channel) sendBytes(bytes []byte) bool {
    select {
    case ch.sendQueue <- bytes:
        atomic.AddInt32(&ch.sendQueueSize, 1)
        return true
    case <-time.After(defaultSendTimeout):
        return false
    }
}

// Queues message to send to this channel.
// Nonblocking, returns true if successful.
// Goroutine-safe
func (ch *Channel) trySendBytes(bytes []byte) bool {
    select {
    case ch.sendQueue <- bytes:
        atomic.AddInt32(&ch.sendQueueSize, 1)
        return true
    default:
        return false
    }
}

// Goroutine-safe
func (ch *Channel) loadSendQueueSize() (size int) {
    return int(atomic.LoadInt32(&ch.sendQueueSize))
}

// Goroutine-safe
// Use only as a heuristic.
func (ch *Channel) canSend() bool {
    return ch.loadSendQueueSize() < defaultSendQueueCapacity
}

// Returns true if any PacketMsgs are pending to be sent.
// Call before calling nextPacketMsg()
// Goroutine-safe
func (ch *Channel) isSendPending() bool {
    if len(ch.sending) == 0 {
        if len(ch.sendQueue) == 0 {
            return false
        }
        ch.sending = <-ch.sendQueue
    }
    return true
}

// Creates a new PacketMsg to send.
// Not goroutine-safe
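// For example (illustrative numbers): with the default 1400-byte max payload,
// a 3000-byte message is sent as packets of 1400, 1400 and 200 bytes, and only
// the last packet has EOF set, telling the receiver the message is complete.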
func (ch *Channel) nextPacketMsg() tmp2p.PacketMsg {
    packet := tmp2p.PacketMsg{ChannelID: int32(ch.desc.ID)}
    maxSize := ch.maxPacketMsgPayloadSize
    packet.Data = ch.sending[:tmmath.MinInt(maxSize, len(ch.sending))]
    if len(ch.sending) <= maxSize {
        packet.EOF = true
        ch.sending = nil
        atomic.AddInt32(&ch.sendQueueSize, -1) // decrement sendQueueSize
    } else {
        packet.EOF = false
        ch.sending = ch.sending[tmmath.MinInt(maxSize, len(ch.sending)):]
    }
    return packet
}

// Writes next PacketMsg to w and updates c.recentlySent.
// Not goroutine-safe
func (ch *Channel) writePacketMsgTo(w io.Writer) (n int, err error) {
    packet := ch.nextPacketMsg()
    n, err = protoio.NewDelimitedWriter(w).WriteMsg(mustWrapPacket(&packet))
    atomic.AddInt64(&ch.recentlySent, int64(n))
    return
}

// Handles incoming PacketMsgs. It returns the message bytes if the message is
// complete; the returned bytes are owned by the caller and will not be modified.
// Not goroutine-safe
func (ch *Channel) recvPacketMsg(packet tmp2p.PacketMsg) ([]byte, error) {
    ch.Logger.Debug("Read PacketMsg", "conn", ch.conn, "packet", packet)
    var recvCap, recvReceived = ch.desc.RecvMessageCapacity, len(ch.recving) + len(packet.Data)
    if recvCap < recvReceived {
        return nil, fmt.Errorf("received message exceeds available capacity: %v < %v", recvCap, recvReceived)
    }
    ch.recving = append(ch.recving, packet.Data...)
    if packet.EOF {
        msgBytes := ch.recving

        ch.recving = make([]byte, 0, ch.desc.RecvBufferCapacity)
        return msgBytes, nil
    }
    return nil, nil
}

// Call this periodically to update stats for throttling purposes.
// Not goroutine-safe
func (ch *Channel) updateStats() {
    // Exponential decay of stats.
    // TODO: optimize.
    atomic.StoreInt64(&ch.recentlySent, int64(float64(atomic.LoadInt64(&ch.recentlySent))*0.8))
}

//----------------------------------------
// Packet

// mustWrapPacket takes a packet kind (oneof) and wraps it in a tmp2p.Packet message.
func mustWrapPacket(pb proto.Message) *tmp2p.Packet {
    var msg tmp2p.Packet

    switch pb := pb.(type) {
    case *tmp2p.Packet: // already a packet
        msg = *pb
    case *tmp2p.PacketPing:
        msg = tmp2p.Packet{
            Sum: &tmp2p.Packet_PacketPing{
                PacketPing: pb,
            },
        }
    case *tmp2p.PacketPong:
        msg = tmp2p.Packet{
            Sum: &tmp2p.Packet_PacketPong{
                PacketPong: pb,
            },
        }
    case *tmp2p.PacketMsg:
        msg = tmp2p.Packet{
            Sum: &tmp2p.Packet_PacketMsg{
                PacketMsg: pb,
            },
        }
    default:
        panic(fmt.Errorf("unknown packet type %T", pb))
    }

    return &msg
}