p2p: make PeerManager.DialNext() and EvictNext() block (#5947)

See #5936 and #5938 for background.

The plan was initially to have `DialNext()` and `EvictNext()` return a channel. However, implementing this became unnecessarily complicated and error-prone. As an example, the channel would be both consumed and populated (via method calls) by the same driving method (e.g. `Router.dialPeers()`), which could easily cause deadlocks where a method call blocked while sending on the channel that the caller itself was responsible for consuming (but couldn't, since it was busy making the method call). It would also require a set of goroutines in the peer manager that would interact with the goroutines in the router in non-obvious ways, and fully populating the channel on startup could cause deadlocks with other startup tasks. Several issues like these made the solution hard to reason about.

I therefore simply made `DialNext()` and `EvictNext()` block until the next peer was available, using internal triggers to wake these methods up in a non-blocking fashion when any relevant state changes occurred. This proved much simpler to reason about, since there are no goroutines in the peer manager (except for trivial retry timers), nor any blocking channel sends, and it instead relies entirely on the existing goroutine structure of the router for concurrency. This also happens to be the same pattern used by the `Transport.Accept()` API, following Go stdlib conventions, so all router goroutines end up using a consistent pattern as well.
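To make the blocking design concrete, below is a minimal sketch of the pattern described above, under simplified assumptions: a caller blocks in `DialNext()` until a candidate peer exists, and any relevant state change wakes waiting callers via a buffered, non-blocking signal channel. The type and field names here (`NodeID`, `candidates`, `dialWaker`) are illustrative only, not the actual `p2p` package API.

package sketch

import (
	"context"
	"sync"
)

// NodeID is a stand-in for a peer identifier.
type NodeID string

// PeerManager is a toy peer manager illustrating the blocking DialNext
// pattern: no internal goroutines, just a mutex-protected candidate list
// and a size-1 wake channel.
type PeerManager struct {
	mtx        sync.Mutex
	candidates []NodeID      // peers ready to be dialed
	dialWaker  chan struct{} // buffered (size 1) wake-up signal
}

func NewPeerManager() *PeerManager {
	return &PeerManager{dialWaker: make(chan struct{}, 1)}
}

// Add registers a dial candidate and wakes any blocked DialNext caller.
// The send is non-blocking: if a wake-up is already pending, another one
// isn't needed.
func (m *PeerManager) Add(id NodeID) {
	m.mtx.Lock()
	m.candidates = append(m.candidates, id)
	m.mtx.Unlock()

	select {
	case m.dialWaker <- struct{}{}:
	default:
	}
}

// DialNext blocks until a peer is available to dial, or until the context
// is canceled, mirroring a blocking Transport.Accept()-style API.
func (m *PeerManager) DialNext(ctx context.Context) (NodeID, error) {
	for {
		m.mtx.Lock()
		if len(m.candidates) > 0 {
			id := m.candidates[0]
			m.candidates = m.candidates[1:]
			m.mtx.Unlock()
			return id, nil
		}
		m.mtx.Unlock()

		// Nothing to dial yet; wait for a state change or cancellation,
		// then loop and re-check.
		select {
		case <-m.dialWaker:
		case <-ctx.Done():
			return "", ctx.Err()
		}
	}
}

A router goroutine then consumes this the same way it consumes `Transport.Accept()`: loop, block on `DialNext()`, and hand each returned peer to a worker, so all of the router's goroutines share one consistent accept-style pattern.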
package p2p_test

import (
	"errors"
	"testing"

	"github.com/fortytw2/leaktest"
	gogotypes "github.com/gogo/protobuf/types"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	dbm "github.com/tendermint/tm-db"

	"github.com/tendermint/tendermint/libs/log"
	"github.com/tendermint/tendermint/p2p"
)

type TestMessage = gogotypes.StringValue
// echoReactor reads inbound envelopes from the channel and echoes each
// message back to its sender until the channel is closed.
func echoReactor(channel *p2p.Channel) {
	for {
		select {
		case envelope := <-channel.In():
			channel.Out() <- p2p.Envelope{
				To:      envelope.From,
				Message: &TestMessage{Value: envelope.Message.(*TestMessage).Value},
			}
		case <-channel.Done():
			return
		}
	}
}
func TestRouter(t *testing.T) {
	defer leaktest.Check(t)()

	logger := log.TestingLogger()
	network := p2p.NewMemoryNetwork(logger)
	transport := network.GenerateTransport()
	chID := p2p.ChannelID(1)

	// Start some other in-memory network nodes to communicate with, running
	// a simple echo reactor that returns received messages.
	peers := []p2p.PeerAddress{}
	for i := 0; i < 3; i++ {
		i := i
		peerManager, err := p2p.NewPeerManager(dbm.NewMemDB(), p2p.PeerManagerOptions{})
		require.NoError(t, err)

		peerTransport := network.GenerateTransport()
		peerRouter := p2p.NewRouter(
			logger.With("peerID", i),
			peerManager,
			map[p2p.Protocol]p2p.Transport{
				p2p.MemoryProtocol: peerTransport,
			},
		)
		peers = append(peers, peerTransport.Endpoints()[0].PeerAddress())

		channel, err := peerRouter.OpenChannel(chID, &TestMessage{})
		require.NoError(t, err)
		defer channel.Close()
		go echoReactor(channel)

		err = peerRouter.Start()
		require.NoError(t, err)
		defer func() { require.NoError(t, peerRouter.Stop()) }()
	}

	// Start the main router and connect it to the peers above.
	peerManager, err := p2p.NewPeerManager(dbm.NewMemDB(), p2p.PeerManagerOptions{})
	require.NoError(t, err)
	defer peerManager.Close()
	for _, address := range peers {
		err := peerManager.Add(address)
		require.NoError(t, err)
	}

	peerUpdates := peerManager.Subscribe()
	defer peerUpdates.Close()

	router := p2p.NewRouter(logger, peerManager, map[p2p.Protocol]p2p.Transport{
		p2p.MemoryProtocol: transport,
	})
	channel, err := router.OpenChannel(chID, &TestMessage{})
	require.NoError(t, err)
	defer channel.Close()

	err = router.Start()
	require.NoError(t, err)
	defer func() {
		// Since the earlier defers run after this one, and we must close
		// channels and subscriptions before the router, we explicitly close
		// them here too.
		peerUpdates.Close()
		channel.Close()
		require.NoError(t, router.Stop())
	}()
	// Wait for peers to come online, and ping them as they do.
	for i := 0; i < len(peers); i++ {
		peerUpdate := <-peerUpdates.Updates()
		peerID := peerUpdate.PeerID
		require.Equal(t, p2p.PeerUpdate{
			PeerID: peerID,
			Status: p2p.PeerStatusUp,
		}, peerUpdate)

		channel.Out() <- p2p.Envelope{To: peerID, Message: &TestMessage{Value: "hi!"}}
		assert.Equal(t, p2p.Envelope{
			From:    peerID,
			Message: &TestMessage{Value: "hi!"},
		}, (<-channel.In()).Strip())
	}

	// We then submit an error for a peer, and watch it get disconnected.
	channel.Error() <- p2p.PeerError{
		PeerID:   peers[0].ID,
		Err:      errors.New("test error"),
		Severity: p2p.PeerErrorSeverityCritical,
	}
	peerUpdate := <-peerUpdates.Updates()
	require.Equal(t, p2p.PeerUpdate{
		PeerID: peers[0].ID,
		Status: p2p.PeerStatusDown,
	}, peerUpdate)

	// We now broadcast a message, which we should receive back from only two peers.
	channel.Out() <- p2p.Envelope{
		Broadcast: true,
		Message:   &TestMessage{Value: "broadcast"},
	}
	for i := 0; i < len(peers)-1; i++ {
		envelope := <-channel.In()
		require.NotEqual(t, peers[0].ID, envelope.From)
		require.Equal(t, &TestMessage{Value: "broadcast"}, envelope.Message)
	}

	// No further messages should arrive.
	select {
	case envelope := <-channel.In():
		t.Errorf("unexpected message: %v", envelope)
	default:
	}
}