p2p: implement new Transport interface (#5791)

This implements a new `Transport` interface and related types for the P2P refactor in #5670. Previously, `conn.MConnection` was very tightly coupled to the `Peer` implementation -- in order to allow alternative non-multiplexed transports (e.g. QUIC), MConnection has now been moved below the `Transport` interface, as `MConnTransport`, and decoupled from the peer. Since the `p2p` package is not covered by our Go API stability guarantees, this is not considered a breaking change, and it is not listed in the changelog.

The initial approach was to implement the new interface in its final form (which also involved possible protocol changes, see https://github.com/tendermint/spec/pull/227). However, it turned out that this would require a large number of changes to existing P2P code because of the previous tight coupling between `Peer` and `MConnection` and the reliance on subtleties in the MConnection behavior. Instead, I have broadened the `Transport` interface to expose much of the existing MConnection interface, preserved much of the existing MConnection logic and behavior in the transport implementation, and tried to make as few changes to the rest of the P2P stack as possible. We will instead reduce this interface gradually as we refactor other parts of the P2P stack.

The low-level transport code and protocol (e.g. MConnection, SecretConnection and so on) has not been significantly changed, and refactoring it is not a priority until we come up with a plan for QUIC adoption, as we may end up discarding the MConnection code entirely. There are no tests of the new `MConnTransport`, as this code is likely to evolve as we proceed with the P2P refactor, but tests should be added before a final release. The E2E tests are sufficient for basic validation in the meantime.
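To make the shape of the change concrete, the sketch below shows roughly what moving MConnection below a transport abstraction looks like. This is an illustration only: the interface and method names here (Transport, Connection, Accept, Dial, SendMessage, ReceiveMessage) are assumptions made for exposition and are not the exact interface introduced in #5791.

// Hypothetical sketch, not the actual interface from #5791.
type Transport interface {
	Accept() (Connection, error)              // wait for an inbound connection
	Dial(address string) (Connection, error)  // open an outbound connection
	Close() error
}

type Connection interface {
	SendMessage(chID byte, msg []byte) (bool, error)
	ReceiveMessage() (chID byte, msg []byte, err error)
	Close() error
}

// An MConnTransport would implement such an interface by wrapping the existing
// MConnection machinery in this file, while a QUIC transport could implement it
// without multiplexing everything over a single stream.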
package conn

import (
	"bufio"
	"errors"
	"fmt"
	"io"
	"math"
	"net"
	"reflect"
	"runtime/debug"
	"sync/atomic"
	"time"

	"github.com/gogo/protobuf/proto"

	flow "github.com/tendermint/tendermint/libs/flowrate"
	"github.com/tendermint/tendermint/libs/log"
	tmmath "github.com/tendermint/tendermint/libs/math"
	"github.com/tendermint/tendermint/libs/protoio"
	"github.com/tendermint/tendermint/libs/service"
	tmsync "github.com/tendermint/tendermint/libs/sync"
	"github.com/tendermint/tendermint/libs/timer"
	tmp2p "github.com/tendermint/tendermint/proto/tendermint/p2p"
)

const (
	// mirrors MaxPacketMsgPayloadSize from config/config.go
	defaultMaxPacketMsgPayloadSize = 1400

	numBatchPacketMsgs = 10
	minReadBufferSize  = 1024
	minWriteBufferSize = 65536
	updateStats        = 2 * time.Second

	// some of these defaults are written in the user config
	// flushThrottle, sendRate, recvRate
	// TODO: remove values present in config
	defaultFlushThrottle = 100 * time.Millisecond

	defaultSendQueueCapacity   = 1
	defaultRecvBufferCapacity  = 4096
	defaultRecvMessageCapacity = 22020096      // 21MB
	defaultSendRate            = int64(512000) // 500KB/s
	defaultRecvRate            = int64(512000) // 500KB/s
	defaultSendTimeout         = 10 * time.Second
	defaultPingInterval        = 60 * time.Second
	defaultPongTimeout         = 45 * time.Second
)
type receiveCbFunc func(chID byte, msgBytes []byte)
type errorCbFunc func(interface{})

/*
Each peer has one `MConnection` (multiplex connection) instance.

__multiplex__ *noun* a system or signal involving simultaneous transmission of
several messages along a single channel of communication.

Each `MConnection` handles message transmission on multiple abstract communication
`Channel`s. Each channel has a globally unique byte id.
The byte id and the relative priorities of each `Channel` are configured upon
initialization of the connection.

There are two methods for sending messages:

	func (m MConnection) Send(chID byte, msgBytes []byte) bool {}
	func (m MConnection) TrySend(chID byte, msgBytes []byte) bool {}

`Send(chID, msgBytes)` is a blocking call that waits until `msg` is
successfully queued for the channel with the given id byte `chID`, or until the
request times out. The message `msg` is serialized using Protobuf.

`TrySend(chID, msgBytes)` is a nonblocking call that returns false if the
channel's queue is full.

Inbound message bytes are handled with an onReceive callback function.
*/
type MConnection struct {
	service.BaseService

	conn          net.Conn
	bufConnReader *bufio.Reader
	bufConnWriter *bufio.Writer
	sendMonitor   *flow.Monitor
	recvMonitor   *flow.Monitor
	send          chan struct{}
	pong          chan struct{}
	channels      []*Channel
	channelsIdx   map[byte]*Channel
	onReceive     receiveCbFunc
	onError       errorCbFunc
	errored       uint32
	config        MConnConfig

	// Closing quitSendRoutine will cause the sendRoutine to eventually quit.
	// doneSendRoutine is closed when the sendRoutine actually quits.
	quitSendRoutine chan struct{}
	doneSendRoutine chan struct{}

	// Closing quitRecvRoutine will cause the recvRoutine to eventually quit.
	quitRecvRoutine chan struct{}

	// used to ensure FlushStop and OnStop
	// are safe to call concurrently.
	stopMtx tmsync.Mutex

	flushTimer *timer.ThrottleTimer // flush writes as necessary but throttled.
	pingTimer  *time.Ticker         // send pings periodically

	// close conn if pong is not received in pongTimeout
	pongTimer     *time.Timer
	pongTimeoutCh chan bool // true - timeout, false - peer sent pong

	chStatsTimer *time.Ticker // update channel stats periodically

	created time.Time // time of creation

	_maxPacketMsgSize int
}
// MConnConfig is a MConnection configuration.
type MConnConfig struct {
	SendRate int64 `mapstructure:"send_rate"`
	RecvRate int64 `mapstructure:"recv_rate"`

	// Maximum payload size
	MaxPacketMsgPayloadSize int `mapstructure:"max_packet_msg_payload_size"`

	// Interval to flush writes (throttled)
	FlushThrottle time.Duration `mapstructure:"flush_throttle"`

	// Interval to send pings
	PingInterval time.Duration `mapstructure:"ping_interval"`

	// Maximum wait time for pongs
	PongTimeout time.Duration `mapstructure:"pong_timeout"`
}

// DefaultMConnConfig returns the default config.
func DefaultMConnConfig() MConnConfig {
	return MConnConfig{
		SendRate:                defaultSendRate,
		RecvRate:                defaultRecvRate,
		MaxPacketMsgPayloadSize: defaultMaxPacketMsgPayloadSize,
		FlushThrottle:           defaultFlushThrottle,
		PingInterval:            defaultPingInterval,
		PongTimeout:             defaultPongTimeout,
	}
}
// NewMConnection wraps net.Conn and creates a multiplex connection.
func NewMConnection(
	conn net.Conn,
	chDescs []*ChannelDescriptor,
	onReceive receiveCbFunc,
	onError errorCbFunc,
) *MConnection {
	return NewMConnectionWithConfig(
		conn,
		chDescs,
		onReceive,
		onError,
		DefaultMConnConfig())
}

// NewMConnectionWithConfig wraps net.Conn and creates a multiplex connection with a config.
func NewMConnectionWithConfig(
	conn net.Conn,
	chDescs []*ChannelDescriptor,
	onReceive receiveCbFunc,
	onError errorCbFunc,
	config MConnConfig,
) *MConnection {
	if config.PongTimeout >= config.PingInterval {
		panic("pongTimeout must be less than pingInterval (otherwise, next ping will reset pong timer)")
	}

	mconn := &MConnection{
		conn:          conn,
		bufConnReader: bufio.NewReaderSize(conn, minReadBufferSize),
		bufConnWriter: bufio.NewWriterSize(conn, minWriteBufferSize),
		sendMonitor:   flow.New(0, 0),
		recvMonitor:   flow.New(0, 0),
		send:          make(chan struct{}, 1),
		pong:          make(chan struct{}, 1),
		onReceive:     onReceive,
		onError:       onError,
		config:        config,
		created:       time.Now(),
	}

	// Create channels
	var channelsIdx = map[byte]*Channel{}
	var channels = []*Channel{}

	for _, desc := range chDescs {
		channel := newChannel(mconn, *desc)
		channelsIdx[channel.desc.ID] = channel
		channels = append(channels, channel)
	}
	mconn.channels = channels
	mconn.channelsIdx = channelsIdx

	mconn.BaseService = *service.NewBaseService(nil, "MConnection", mconn)

	// maxPacketMsgSize() is a bit heavy, so call just once
	mconn._maxPacketMsgSize = mconn.maxPacketMsgSize()

	return mconn
}
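
// The following example is illustrative only and not part of the original
// file: a hypothetical helper showing how a caller might construct an
// MConnection over an in-memory pipe. The channel descriptor, rate override,
// and callback bodies are assumptions made for the sketch.
func exampleNewMConnection() *MConnection {
	local, _ := net.Pipe() // stand-in for a real (usually secret) TCP connection

	chDescs := []*ChannelDescriptor{
		{ID: 0x01, Priority: 1, SendQueueCapacity: 10},
	}

	onReceive := func(chID byte, msgBytes []byte) {
		// hand msgBytes to whatever reactor is registered for chID
	}
	onError := func(err interface{}) {
		// tear the peer down; err is whatever was passed to stopForError
	}

	cfg := DefaultMConnConfig()
	cfg.SendRate = 2 * defaultSendRate // example override of the 500KB/s default

	return NewMConnectionWithConfig(local, chDescs, onReceive, onError, cfg)
}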
func (c *MConnection) SetLogger(l log.Logger) {
	c.BaseService.SetLogger(l)
	for _, ch := range c.channels {
		ch.SetLogger(l)
	}
}

// OnStart implements BaseService
func (c *MConnection) OnStart() error {
	if err := c.BaseService.OnStart(); err != nil {
		return err
	}
	c.flushTimer = timer.NewThrottleTimer("flush", c.config.FlushThrottle)
	c.pingTimer = time.NewTicker(c.config.PingInterval)
	c.pongTimeoutCh = make(chan bool, 1)
	c.chStatsTimer = time.NewTicker(updateStats)
	c.quitSendRoutine = make(chan struct{})
	c.doneSendRoutine = make(chan struct{})
	c.quitRecvRoutine = make(chan struct{})
	go c.sendRoutine()
	go c.recvRoutine()
	return nil
}
// stopServices stops the BaseService and timers and closes the quitSendRoutine.
// If the quitSendRoutine was already closed, it returns true, otherwise it returns false.
// It uses the stopMtx to ensure only one of FlushStop and OnStop can do this at a time.
func (c *MConnection) stopServices() (alreadyStopped bool) {
	c.stopMtx.Lock()
	defer c.stopMtx.Unlock()

	select {
	case <-c.quitSendRoutine:
		// already quit
		return true
	default:
	}

	select {
	case <-c.quitRecvRoutine:
		// already quit
		return true
	default:
	}

	c.BaseService.OnStop()
	c.flushTimer.Stop()
	c.pingTimer.Stop()
	c.chStatsTimer.Stop()

	// inform the recvRoutine that we are shutting down
	close(c.quitRecvRoutine)
	close(c.quitSendRoutine)
	return false
}
// FlushStop replicates the logic of OnStop.
// It additionally ensures that all successful
// .Send() calls will get flushed before closing
// the connection.
func (c *MConnection) FlushStop() {
	if c.stopServices() {
		return
	}

	// this block is unique to FlushStop
	{
		// wait until the sendRoutine exits
		// so we don't race on calling sendSomePacketMsgs
		<-c.doneSendRoutine

		// Send and flush all pending msgs.
		// Since sendRoutine has exited, we can call this
		// safely
		eof := c.sendSomePacketMsgs()
		for !eof {
			eof = c.sendSomePacketMsgs()
		}
		c.flush()

		// Now we can close the connection
	}

	c.conn.Close()

	// We can't close pong safely here because
	// recvRoutine may write to it after we've stopped.
	// Though it doesn't need to get closed at all,
	// we close it @ recvRoutine.

	// c.Stop()
}
// OnStop implements BaseService
func (c *MConnection) OnStop() {
	if c.stopServices() {
		return
	}

	c.conn.Close()

	// We can't close pong safely here because
	// recvRoutine may write to it after we've stopped.
	// Though it doesn't need to get closed at all,
	// we close it @ recvRoutine.
}
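
// Illustrative only (not part of the original file): the expected lifecycle of
// an MConnection from the caller's side. Start spawns sendRoutine and
// recvRoutine; FlushStop drains any queued packets before closing the
// underlying conn.
func exampleLifecycle(c *MConnection) error {
	if err := c.Start(); err != nil {
		return err
	}
	defer c.FlushStop()

	// ... call c.Send / c.TrySend while the connection is running ...
	return nil
}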
func (c *MConnection) String() string {
	return fmt.Sprintf("MConn{%v}", c.conn.RemoteAddr())
}

func (c *MConnection) flush() {
	c.Logger.Debug("Flush", "conn", c)
	err := c.bufConnWriter.Flush()
	if err != nil {
		c.Logger.Debug("MConnection flush failed", "err", err)
	}
}

// Catch panics, usually caused by remote disconnects.
func (c *MConnection) _recover() {
	if r := recover(); r != nil {
		c.Logger.Error("MConnection panicked", "err", r, "stack", string(debug.Stack()))
		c.stopForError(fmt.Errorf("recovered from panic: %v", r))
	}
}

func (c *MConnection) stopForError(r interface{}) {
	if err := c.Stop(); err != nil {
		c.Logger.Error("Error stopping connection", "err", err)
	}

	if atomic.CompareAndSwapUint32(&c.errored, 0, 1) {
		if c.onError != nil {
			c.onError(r)
		}
	}
}
// Queues a message to be sent to channel.
func (c *MConnection) Send(chID byte, msgBytes []byte) bool {
	if !c.IsRunning() {
		return false
	}

	c.Logger.Debug("Send", "channel", chID, "conn", c, "msgBytes", msgBytes)

	// Send message to channel.
	channel, ok := c.channelsIdx[chID]
	if !ok {
		c.Logger.Error(fmt.Sprintf("Cannot send bytes, unknown channel %X", chID))
		return false
	}

	success := channel.sendBytes(msgBytes)
	if success {
		// Wake up sendRoutine if necessary
		select {
		case c.send <- struct{}{}:
		default:
		}
	} else {
		c.Logger.Debug("Send failed", "channel", chID, "conn", c, "msgBytes", msgBytes)
	}
	return success
}

// Queues a message to be sent to channel.
// Nonblocking, returns true if successful.
func (c *MConnection) TrySend(chID byte, msgBytes []byte) bool {
	if !c.IsRunning() {
		return false
	}

	c.Logger.Debug("TrySend", "channel", chID, "conn", c, "msgBytes", msgBytes)

	// Send message to channel.
	channel, ok := c.channelsIdx[chID]
	if !ok {
		c.Logger.Error(fmt.Sprintf("Cannot send bytes, unknown channel %X", chID))
		return false
	}

	ok = channel.trySendBytes(msgBytes)
	if ok {
		// Wake up sendRoutine if necessary
		select {
		case c.send <- struct{}{}:
		default:
		}
	}

	return ok
}

// CanSend returns true if you can send more data onto the chID, false
// otherwise. Use only as a heuristic.
func (c *MConnection) CanSend(chID byte) bool {
	if !c.IsRunning() {
		return false
	}

	channel, ok := c.channelsIdx[chID]
	if !ok {
		c.Logger.Error(fmt.Sprintf("Unknown channel %X", chID))
		return false
	}
	return channel.canSend()
}
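
// Illustrative only (not part of the original file): a sketch of how a caller
// might use the send-side API. The fallback strategy is an assumption; real
// callers typically marshal a proto message into msgBytes before queueing it.
func exampleSendOnChannel(c *MConnection, chID byte, msgBytes []byte) bool {
	if !c.CanSend(chID) {
		// Heuristic only: the queue looks full, so skip the nonblocking path
		// and block for up to defaultSendTimeout instead.
		return c.Send(chID, msgBytes)
	}
	if c.TrySend(chID, msgBytes) {
		return true
	}
	// TrySend lost the race for queue space; fall back to the blocking call.
	return c.Send(chID, msgBytes)
}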
// sendRoutine polls for packets to send from channels.
func (c *MConnection) sendRoutine() {
	defer c._recover()

	protoWriter := protoio.NewDelimitedWriter(c.bufConnWriter)

FOR_LOOP:
	for {
		var _n int
		var err error
	SELECTION:
		select {
		case <-c.flushTimer.Ch:
			// NOTE: flushTimer.Set() must be called every time
			// something is written to .bufConnWriter.
			c.flush()
		case <-c.chStatsTimer.C:
			for _, channel := range c.channels {
				channel.updateStats()
			}
		case <-c.pingTimer.C:
			c.Logger.Debug("Send Ping")
			_n, err = protoWriter.WriteMsg(mustWrapPacket(&tmp2p.PacketPing{}))
			if err != nil {
				c.Logger.Error("Failed to send PacketPing", "err", err)
				break SELECTION
			}
			c.sendMonitor.Update(_n)
			c.Logger.Debug("Starting pong timer", "dur", c.config.PongTimeout)
			c.pongTimer = time.AfterFunc(c.config.PongTimeout, func() {
				select {
				case c.pongTimeoutCh <- true:
				default:
				}
			})
			c.flush()
		case timeout := <-c.pongTimeoutCh:
			if timeout {
				c.Logger.Debug("Pong timeout")
				err = errors.New("pong timeout")
			} else {
				c.stopPongTimer()
			}
		case <-c.pong:
			c.Logger.Debug("Send Pong")
			_n, err = protoWriter.WriteMsg(mustWrapPacket(&tmp2p.PacketPong{}))
			if err != nil {
				c.Logger.Error("Failed to send PacketPong", "err", err)
				break SELECTION
			}
			c.sendMonitor.Update(_n)
			c.flush()
		case <-c.quitSendRoutine:
			break FOR_LOOP
		case <-c.send:
			// Send some PacketMsgs
			eof := c.sendSomePacketMsgs()
			if !eof {
				// Keep sendRoutine awake.
				select {
				case c.send <- struct{}{}:
				default:
				}
			}
		}

		if !c.IsRunning() {
			break FOR_LOOP
		}
		if err != nil {
			c.Logger.Error("Connection failed @ sendRoutine", "conn", c, "err", err)
			c.stopForError(err)
			break FOR_LOOP
		}
	}

	// Cleanup
	c.stopPongTimer()
	close(c.doneSendRoutine)
}
// Returns true if messages from channels were exhausted.
// Blocks in accordance with .sendMonitor throttling.
func (c *MConnection) sendSomePacketMsgs() bool {
	// Block until .sendMonitor says we can write.
	// Once we're ready we send more than we asked for,
	// but amortized it should even out.
	c.sendMonitor.Limit(c._maxPacketMsgSize, atomic.LoadInt64(&c.config.SendRate), true)

	// Now send some PacketMsgs.
	for i := 0; i < numBatchPacketMsgs; i++ {
		if c.sendPacketMsg() {
			return true
		}
	}
	return false
}

// Returns true if messages from channels were exhausted.
func (c *MConnection) sendPacketMsg() bool {
	// Choose a channel to create a PacketMsg from.
	// The chosen channel will be the one whose recentlySent/priority is the least.
	var leastRatio float32 = math.MaxFloat32
	var leastChannel *Channel
	for _, channel := range c.channels {
		// If nothing to send, skip this channel
		if !channel.isSendPending() {
			continue
		}
		// Get ratio, and keep track of lowest ratio.
		ratio := float32(channel.recentlySent) / float32(channel.desc.Priority)
		if ratio < leastRatio {
			leastRatio = ratio
			leastChannel = channel
		}
	}

	// Nothing to send?
	if leastChannel == nil {
		return true
	}
	// c.Logger.Info("Found a msgPacket to send")

	// Make & send a PacketMsg from this channel
	_n, err := leastChannel.writePacketMsgTo(c.bufConnWriter)
	if err != nil {
		c.Logger.Error("Failed to write PacketMsg", "err", err)
		c.stopForError(err)
		return true
	}
	c.sendMonitor.Update(_n)
	c.flushTimer.Set()
	return false
}
// recvRoutine reads PacketMsgs and reconstructs the message using the channels' "recving" buffer.
// After a whole message has been assembled, it's pushed to onReceive().
// Blocks depending on how the connection is throttled.
// Otherwise, it never blocks.
func (c *MConnection) recvRoutine() {
	defer c._recover()

	protoReader := protoio.NewDelimitedReader(c.bufConnReader, c._maxPacketMsgSize)

FOR_LOOP:
	for {
		// Block until .recvMonitor says we can read.
		c.recvMonitor.Limit(c._maxPacketMsgSize, atomic.LoadInt64(&c.config.RecvRate), true)

		// Peek into bufConnReader for debugging
		/*
			if numBytes := c.bufConnReader.Buffered(); numBytes > 0 {
				bz, err := c.bufConnReader.Peek(tmmath.MinInt(numBytes, 100))
				if err == nil {
					// return
				} else {
					c.Logger.Debug("Error peeking connection buffer", "err", err)
					// return nil
				}
				c.Logger.Info("Peek connection buffer", "numBytes", numBytes, "bz", bz)
			}
		*/

		// Read packet type
		var packet tmp2p.Packet
		_n, err := protoReader.ReadMsg(&packet)
		c.recvMonitor.Update(_n)
		if err != nil {
			// stopServices was invoked and we are shutting down
			// receiving is expected to fail since we will close the connection
			select {
			case <-c.quitRecvRoutine:
				break FOR_LOOP
			default:
			}

			if c.IsRunning() {
				if err == io.EOF {
					c.Logger.Info("Connection is closed @ recvRoutine (likely by the other side)", "conn", c)
				} else {
					c.Logger.Debug("Connection failed @ recvRoutine (reading byte)", "conn", c, "err", err)
				}
				c.stopForError(err)
			}
			break FOR_LOOP
		}

		// Read more depending on packet type.
		switch pkt := packet.Sum.(type) {
		case *tmp2p.Packet_PacketPing:
			// TODO: prevent abuse, as they cause flush()'s.
			// https://github.com/tendermint/tendermint/issues/1190
			c.Logger.Debug("Receive Ping")
			select {
			case c.pong <- struct{}{}:
			default:
				// never block
			}
		case *tmp2p.Packet_PacketPong:
			c.Logger.Debug("Receive Pong")
			select {
			case c.pongTimeoutCh <- false:
			default:
				// never block
			}
		case *tmp2p.Packet_PacketMsg:
			channel, ok := c.channelsIdx[byte(pkt.PacketMsg.ChannelID)]
			if !ok || channel == nil {
				err := fmt.Errorf("unknown channel %X", pkt.PacketMsg.ChannelID)
				c.Logger.Debug("Connection failed @ recvRoutine", "conn", c, "err", err)
				c.stopForError(err)
				break FOR_LOOP
			}

			msgBytes, err := channel.recvPacketMsg(*pkt.PacketMsg)
			if err != nil {
				if c.IsRunning() {
					c.Logger.Debug("Connection failed @ recvRoutine", "conn", c, "err", err)
					c.stopForError(err)
				}
				break FOR_LOOP
			}
			if msgBytes != nil {
				c.Logger.Debug("Received bytes", "chID", pkt.PacketMsg.ChannelID, "msgBytes", msgBytes)
				// NOTE: This means the reactor.Receive runs in the same thread as the p2p recv routine
				c.onReceive(byte(pkt.PacketMsg.ChannelID), msgBytes)
			}
		default:
			err := fmt.Errorf("unknown message type %v", reflect.TypeOf(packet))
			c.Logger.Error("Connection failed @ recvRoutine", "conn", c, "err", err)
			c.stopForError(err)
			break FOR_LOOP
		}
	}

	// Cleanup
	close(c.pong)
	for range c.pong {
		// Drain
	}
}
// not goroutine-safe
func (c *MConnection) stopPongTimer() {
	if c.pongTimer != nil {
		_ = c.pongTimer.Stop()
		c.pongTimer = nil
	}
}

// maxPacketMsgSize returns the maximum size of a PacketMsg
func (c *MConnection) maxPacketMsgSize() int {
	bz, err := proto.Marshal(mustWrapPacket(&tmp2p.PacketMsg{
		ChannelID: 0x01,
		EOF:       true,
		Data:      make([]byte, c.config.MaxPacketMsgPayloadSize),
	}))
	if err != nil {
		panic(err)
	}
	return len(bz)
}
type ConnectionStatus struct {
	Duration    time.Duration
	SendMonitor flow.Status
	RecvMonitor flow.Status
	Channels    []ChannelStatus
}

type ChannelStatus struct {
	ID                byte
	SendQueueCapacity int
	SendQueueSize     int
	Priority          int
	RecentlySent      int64
}

func (c *MConnection) Status() ConnectionStatus {
	var status ConnectionStatus
	status.Duration = time.Since(c.created)
	status.SendMonitor = c.sendMonitor.Status()
	status.RecvMonitor = c.recvMonitor.Status()
	status.Channels = make([]ChannelStatus, len(c.channels))
	for i, channel := range c.channels {
		status.Channels[i] = ChannelStatus{
			ID:                channel.desc.ID,
			SendQueueCapacity: cap(channel.sendQueue),
			SendQueueSize:     int(atomic.LoadInt32(&channel.sendQueueSize)),
			Priority:          channel.desc.Priority,
			RecentlySent:      atomic.LoadInt64(&channel.recentlySent),
		}
	}
	return status
}
//-----------------------------------------------------------------------------

type ChannelDescriptor struct {
	ID                  byte
	Priority            int
	SendQueueCapacity   int
	RecvBufferCapacity  int
	RecvMessageCapacity int
}

func (chDesc ChannelDescriptor) FillDefaults() (filled ChannelDescriptor) {
	if chDesc.SendQueueCapacity == 0 {
		chDesc.SendQueueCapacity = defaultSendQueueCapacity
	}
	if chDesc.RecvBufferCapacity == 0 {
		chDesc.RecvBufferCapacity = defaultRecvBufferCapacity
	}
	if chDesc.RecvMessageCapacity == 0 {
		chDesc.RecvMessageCapacity = defaultRecvMessageCapacity
	}
	filled = chDesc
	return
}
// TODO: lowercase.
// NOTE: not goroutine-safe.
type Channel struct {
	conn          *MConnection
	desc          ChannelDescriptor
	sendQueue     chan []byte
	sendQueueSize int32 // atomic.
	recving       []byte
	sending       []byte
	recentlySent  int64 // exponential moving average

	maxPacketMsgPayloadSize int

	Logger log.Logger
}

func newChannel(conn *MConnection, desc ChannelDescriptor) *Channel {
	desc = desc.FillDefaults()
	if desc.Priority <= 0 {
		panic("Channel default priority must be a positive integer")
	}
	return &Channel{
		conn:                    conn,
		desc:                    desc,
		sendQueue:               make(chan []byte, desc.SendQueueCapacity),
		recving:                 make([]byte, 0, desc.RecvBufferCapacity),
		maxPacketMsgPayloadSize: conn.config.MaxPacketMsgPayloadSize,
	}
}
func (ch *Channel) SetLogger(l log.Logger) {
	ch.Logger = l
}

// Queues message to send to this channel.
// Goroutine-safe
// Times out (and returns false) after defaultSendTimeout
func (ch *Channel) sendBytes(bytes []byte) bool {
	select {
	case ch.sendQueue <- bytes:
		atomic.AddInt32(&ch.sendQueueSize, 1)
		return true
	case <-time.After(defaultSendTimeout):
		return false
	}
}

// Queues message to send to this channel.
// Nonblocking, returns true if successful.
// Goroutine-safe
func (ch *Channel) trySendBytes(bytes []byte) bool {
	select {
	case ch.sendQueue <- bytes:
		atomic.AddInt32(&ch.sendQueueSize, 1)
		return true
	default:
		return false
	}
}

// Goroutine-safe
func (ch *Channel) loadSendQueueSize() (size int) {
	return int(atomic.LoadInt32(&ch.sendQueueSize))
}

// Goroutine-safe
// Use only as a heuristic.
func (ch *Channel) canSend() bool {
	return ch.loadSendQueueSize() < defaultSendQueueCapacity
}
// Returns true if any PacketMsgs are pending to be sent.
// Call before calling nextPacketMsg()
// Goroutine-safe
func (ch *Channel) isSendPending() bool {
	if len(ch.sending) == 0 {
		if len(ch.sendQueue) == 0 {
			return false
		}
		ch.sending = <-ch.sendQueue
	}
	return true
}

// Creates a new PacketMsg to send.
// Not goroutine-safe
func (ch *Channel) nextPacketMsg() tmp2p.PacketMsg {
	packet := tmp2p.PacketMsg{ChannelID: int32(ch.desc.ID)}
	maxSize := ch.maxPacketMsgPayloadSize
	packet.Data = ch.sending[:tmmath.MinInt(maxSize, len(ch.sending))]
	if len(ch.sending) <= maxSize {
		packet.EOF = true
		ch.sending = nil
		atomic.AddInt32(&ch.sendQueueSize, -1) // decrement sendQueueSize
	} else {
		packet.EOF = false
		ch.sending = ch.sending[tmmath.MinInt(maxSize, len(ch.sending)):]
	}
	return packet
}
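
// Worked example (illustrative, not part of the original file): with the
// default 1400-byte payload limit, a 3000-byte message queued on a channel is
// emitted by nextPacketMsg as three PacketMsgs: 1400 bytes (EOF=false),
// 1400 bytes (EOF=false), and 200 bytes (EOF=true). The receiving side appends
// each fragment in recvPacketMsg and only returns the reassembled message once
// it sees EOF=true.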
// Writes next PacketMsg to w and updates c.recentlySent.
// Not goroutine-safe
func (ch *Channel) writePacketMsgTo(w io.Writer) (n int, err error) {
	packet := ch.nextPacketMsg()
	n, err = protoio.NewDelimitedWriter(w).WriteMsg(mustWrapPacket(&packet))
	atomic.AddInt64(&ch.recentlySent, int64(n))
	return
}

// Handles incoming PacketMsgs. It returns the message bytes if the message is
// complete, which are owned by the caller and will not be modified.
// Not goroutine-safe
func (ch *Channel) recvPacketMsg(packet tmp2p.PacketMsg) ([]byte, error) {
	ch.Logger.Debug("Read PacketMsg", "conn", ch.conn, "packet", packet)
	var recvCap, recvReceived = ch.desc.RecvMessageCapacity, len(ch.recving) + len(packet.Data)
	if recvCap < recvReceived {
		return nil, fmt.Errorf("received message exceeds available capacity: %v < %v", recvCap, recvReceived)
	}
	ch.recving = append(ch.recving, packet.Data...)
	if packet.EOF {
		msgBytes := ch.recving
		ch.recving = make([]byte, 0, ch.desc.RecvBufferCapacity)
		return msgBytes, nil
	}
	return nil, nil
}

// Call this periodically to update stats for throttling purposes.
// Not goroutine-safe
func (ch *Channel) updateStats() {
	// Exponential decay of stats.
	// TODO: optimize.
	atomic.StoreInt64(&ch.recentlySent, int64(float64(atomic.LoadInt64(&ch.recentlySent))*0.8))
}
//----------------------------------------
// Packet

// mustWrapPacket takes a packet kind (oneof) and wraps it in a tmp2p.Packet message.
func mustWrapPacket(pb proto.Message) *tmp2p.Packet {
	var msg tmp2p.Packet

	switch pb := pb.(type) {
	case *tmp2p.Packet: // already a packet
		msg = *pb
	case *tmp2p.PacketPing:
		msg = tmp2p.Packet{
			Sum: &tmp2p.Packet_PacketPing{
				PacketPing: pb,
			},
		}
	case *tmp2p.PacketPong:
		msg = tmp2p.Packet{
			Sum: &tmp2p.Packet_PacketPong{
				PacketPong: pb,
			},
		}
	case *tmp2p.PacketMsg:
		msg = tmp2p.Packet{
			Sum: &tmp2p.Packet_PacketMsg{
				PacketMsg: pb,
			},
		}
	default:
		panic(fmt.Errorf("unknown packet type %T", pb))
	}

	return &msg
}