build: Make sure to test packages with external tests (#7608)

The test filter was looking for "TestGoFiles", which does not include tests in a separate package (e.g., "package foo_test" for "package foo"). This caused several packages not to be tested in CI, including:

github.com/tendermint/tendermint/abci/client
github.com/tendermint/tendermint/crypto
github.com/tendermint/tendermint/crypto/tmhash
github.com/tendermint/tendermint/internal/eventbus
github.com/tendermint/tendermint/internal/evidence
github.com/tendermint/tendermint/internal/inspect
github.com/tendermint/tendermint/internal/jsontypes
github.com/tendermint/tendermint/internal/libs/protoio
github.com/tendermint/tendermint/internal/libs/sync
github.com/tendermint/tendermint/internal/p2p/pex
github.com/tendermint/tendermint/internal/pubsub
github.com/tendermint/tendermint/internal/pubsub/query
github.com/tendermint/tendermint/internal/pubsub/query/syntax
github.com/tendermint/tendermint/internal/state/indexer
github.com/tendermint/tendermint/internal/state/indexer/block/kv
github.com/tendermint/tendermint/libs/json
github.com/tendermint/tendermint/libs/log
github.com/tendermint/tendermint/libs/os
github.com/tendermint/tendermint/light
github.com/tendermint/tendermint/light/provider/http
github.com/tendermint/tendermint/privval/grpc
github.com/tendermint/tendermint/proto/tendermint/blocksync
github.com/tendermint/tendermint/proto/tendermint/consensus
github.com/tendermint/tendermint/proto/tendermint/statesync
github.com/tendermint/tendermint/rpc/client
github.com/tendermint/tendermint/rpc/client/mock
github.com/tendermint/tendermint/test/e2e/tests
github.com/tendermint/tendermint/test/fuzz/mempool
github.com/tendermint/tendermint/test/fuzz/p2p/secretconnection
github.com/tendermint/tendermint/test/fuzz/rpc/jsonrpc/server

Updates #7626 and #7634.

3 years ago
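
The distinction the commit message relies on is between a package's in-package test files (TestGoFiles, "package foo") and its external test files (XTestGoFiles, "package foo_test"). The actual CI filter change is not reproduced on this page; purely as an illustrative sketch of that distinction (not the #7608 fix itself), the standard go/build package exposes both lists:

// Hypothetical helper, not the actual #7608 filter: it only illustrates that a
// package may have external tests (XTestGoFiles) even when TestGoFiles is empty.
package main

import (
	"fmt"
	"go/build"
	"log"
	"os"
)

func main() {
	dir := "."
	if len(os.Args) > 1 {
		dir = os.Args[1]
	}
	pkg, err := build.ImportDir(dir, 0)
	if err != nil {
		log.Fatal(err)
	}
	// A package counts as "tested" if it has either kind of test file.
	hasTests := len(pkg.TestGoFiles) > 0 || len(pkg.XTestGoFiles) > 0
	fmt.Printf("%s: tests=%v (in-package=%d, external=%d)\n",
		dir, hasTests, len(pkg.TestGoFiles), len(pkg.XTestGoFiles))
}

A filter that checks only TestGoFiles misses packages whose tests live entirely in an external _test package, which is exactly how the packages listed above were skipped.
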
cleanup: Reduce and normalize import path aliasing. (#6975)

The code in the Tendermint repository makes heavy use of import aliasing. This is made necessary by our extensive reuse of common base package names, and by repetition of similar names across different subdirectories. Unfortunately we have not been very consistent about which packages we alias in various circumstances, and the aliases we use vary. In the spirit of the advice in the style guide and https://github.com/golang/go/wiki/CodeReviewComments#imports, this change makes an effort to clean up and normalize import aliasing.

This change makes no API or behavioral changes. It is a pure cleanup intended to help make the code more readable to developers (including myself) trying to understand what is being imported where. Only unexported names have been modified, and the changes were generated and applied mechanically with gofmt -r and comby, respecting the lexical and syntactic rules of Go. Even so, I did not fix every inconsistency. Where the changes would be too disruptive, I left it alone.

The principles I followed in this cleanup are:

- Remove aliases that restate the package name.
- Remove aliases where the base package name is unambiguous.
- Move overly-terse abbreviations from the import to the usage site.
- Fix lexical issues (remove underscores, remove capitalization).
- Fix import groupings to more closely match the style guide.
- Group blank (side-effecting) imports and ensure they are commented.
- Add aliases to multiple imports with the same base package name.

3 years ago
// Temporarily disabled pending https://github.com/tendermint/tendermint/issues/7626.
//go:build issue7626

package pex_test

import (
	"context"
	"errors"
	"strings"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
	dbm "github.com/tendermint/tm-db"

	"github.com/tendermint/tendermint/crypto/ed25519"
	"github.com/tendermint/tendermint/internal/p2p"
	"github.com/tendermint/tendermint/internal/p2p/p2ptest"
	"github.com/tendermint/tendermint/internal/p2p/pex"
	"github.com/tendermint/tendermint/libs/log"
	p2pproto "github.com/tendermint/tendermint/proto/tendermint/p2p"
	"github.com/tendermint/tendermint/types"
)

const (
	checkFrequency    = 500 * time.Millisecond
	defaultBufferSize = 2
	shortWait         = 10 * time.Second
	longWait          = 60 * time.Second
	firstNode         = 0
	secondNode        = 1
	thirdNode         = 2
)

func TestReactorBasic(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// start a network with one mock reactor and one "real" reactor
	testNet := setupNetwork(ctx, t, testOptions{
		MockNodes:  1,
		TotalNodes: 2,
	})
	testNet.connectAll(ctx, t)
	testNet.start(ctx, t)

	// assert that the mock node receives a request from the real node
	testNet.listenForRequest(ctx, t, secondNode, firstNode, shortWait)

	// assert that when a mock node sends a request it receives a response (and
	// the correct one)
	testNet.sendRequest(ctx, t, firstNode, secondNode)
	testNet.listenForResponse(ctx, t, secondNode, firstNode, shortWait, []p2pproto.PexAddress(nil))
}

func TestReactorConnectFullNetwork(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	testNet := setupNetwork(ctx, t, testOptions{
		TotalNodes: 4,
	})

	// connect each node to only one other node (two-way connections mean each
	// actually ends up with two, which is fine here)
	testNet.connectN(ctx, t, 1)
	testNet.start(ctx, t)

	// assert that all nodes add each other in the network
	for idx := 0; idx < len(testNet.nodes); idx++ {
		testNet.requireNumberOfPeers(t, idx, len(testNet.nodes)-1, longWait)
	}
}

func TestReactorSendsRequestsTooOften(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	r := setupSingle(ctx, t)

	badNode := newNodeID(t, "b")

	r.pexInCh <- p2p.Envelope{
		From:    badNode,
		Message: &p2pproto.PexRequest{},
	}

	resp := <-r.pexOutCh
	msg, ok := resp.Message.(*p2pproto.PexResponse)
	require.True(t, ok)
	require.Empty(t, msg.Addresses)

	r.pexInCh <- p2p.Envelope{
		From:    badNode,
		Message: &p2pproto.PexRequest{},
	}

	peerErr := <-r.pexErrCh
	require.Error(t, peerErr.Err)
	require.Empty(t, r.pexOutCh)
	require.Contains(t, peerErr.Err.Error(), "peer sent a request too close after a prior one")
	require.Equal(t, badNode, peerErr.NodeID)
}

func TestReactorSendsResponseWithoutRequest(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	testNet := setupNetwork(ctx, t, testOptions{
		MockNodes:  1,
		TotalNodes: 3,
	})
	testNet.connectAll(ctx, t)
	testNet.start(ctx, t)

	// firstNode sends the secondNode an unrequested response
	// NOTE: secondNode will send a request by default during startup so we send
	// two responses to counter that.
	testNet.sendResponse(ctx, t, firstNode, secondNode, []int{thirdNode})
	testNet.sendResponse(ctx, t, firstNode, secondNode, []int{thirdNode})

	// secondNode should evict the firstNode
	testNet.listenForPeerUpdate(ctx, t, secondNode, firstNode, p2p.PeerStatusDown, shortWait)
}

func TestReactorNeverSendsTooManyPeers(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	testNet := setupNetwork(ctx, t, testOptions{
		MockNodes:  1,
		TotalNodes: 2,
	})
	testNet.connectAll(ctx, t)
	testNet.start(ctx, t)

	testNet.addNodes(ctx, t, 110)
	nodes := make([]int, 110)
	for i := 0; i < len(nodes); i++ {
		nodes[i] = i + 2
	}
	testNet.addAddresses(t, secondNode, nodes)

	// first we check that even though we have 110 peers, honest pex reactors
	// only send 100 (test if secondNode sends firstNode 100 addresses)
	testNet.pingAndlistenForNAddresses(ctx, t, secondNode, firstNode, shortWait, 100)
}

func TestReactorErrorsOnReceivingTooManyPeers(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	r := setupSingle(ctx, t)

	peer := p2p.NodeAddress{Protocol: p2p.MemoryProtocol, NodeID: randomNodeID(t)}
	added, err := r.manager.Add(peer)
	require.NoError(t, err)
	require.True(t, added)

	addresses := make([]p2pproto.PexAddress, 101)
	for i := 0; i < len(addresses); i++ {
		nodeAddress := p2p.NodeAddress{Protocol: p2p.MemoryProtocol, NodeID: randomNodeID(t)}
		addresses[i] = p2pproto.PexAddress{
			URL: nodeAddress.String(),
		}
	}

	r.peerCh <- p2p.PeerUpdate{
		NodeID: peer.NodeID,
		Status: p2p.PeerStatusUp,
	}

	select {
	// wait for a request and then send a response with too many addresses
	case req := <-r.pexOutCh:
		if _, ok := req.Message.(*p2pproto.PexRequest); !ok {
			t.Fatal("expected v2 pex request")
		}
		r.pexInCh <- p2p.Envelope{
			From: peer.NodeID,
			Message: &p2pproto.PexResponse{
				Addresses: addresses,
			},
		}
	case <-time.After(10 * time.Second):
		t.Fatal("pex failed to send a request within 10 seconds")
	}

	peerErr := <-r.pexErrCh
	require.Error(t, peerErr.Err)
	require.Empty(t, r.pexOutCh)
	require.Contains(t, peerErr.Err.Error(), "peer sent too many addresses")
	require.Equal(t, peer.NodeID, peerErr.NodeID)
}

func TestReactorSmallPeerStoreInALargeNetwork(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	testNet := setupNetwork(ctx, t, testOptions{
		TotalNodes:   8,
		MaxPeers:     4,
		MaxConnected: 3,
		BufferSize:   8,
	})
	testNet.connectN(ctx, t, 1)
	testNet.start(ctx, t)

	// test that all nodes reach full capacity
	for _, nodeID := range testNet.nodes {
		require.Eventually(t, func() bool {
			// nolint:scopelint
			return testNet.network.Nodes[nodeID].PeerManager.PeerRatio() >= 0.9
		}, longWait, checkFrequency)
	}
}

func TestReactorLargePeerStoreInASmallNetwork(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	testNet := setupNetwork(ctx, t, testOptions{
		TotalNodes:   3,
		MaxPeers:     25,
		MaxConnected: 25,
		BufferSize:   5,
	})
	testNet.connectN(ctx, t, 1)
	testNet.start(ctx, t)

	// assert that all nodes add each other in the network
	for idx := 0; idx < len(testNet.nodes); idx++ {
		testNet.requireNumberOfPeers(t, idx, len(testNet.nodes)-1, longWait)
	}
}

func TestReactorWithNetworkGrowth(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	testNet := setupNetwork(ctx, t, testOptions{
		TotalNodes: 5,
		BufferSize: 5,
	})
	testNet.connectAll(ctx, t)
	testNet.start(ctx, t)

	// assert that all nodes add each other in the network
	for idx := 0; idx < len(testNet.nodes); idx++ {
		testNet.requireNumberOfPeers(t, idx, len(testNet.nodes)-1, shortWait)
	}

	// now we inject 10 more nodes
	testNet.addNodes(ctx, t, 10)
	for i := 5; i < testNet.total; i++ {
		node := testNet.nodes[i]
		require.NoError(t, testNet.reactors[node].Start(ctx))
		require.True(t, testNet.reactors[node].IsRunning())
		// we connect all new nodes to a single entry point and check that the
		// node can distribute the addresses to all the others
		testNet.connectPeers(ctx, t, 0, i)
	}
	require.Len(t, testNet.reactors, 15)

	// assert that all nodes add each other in the network
	for idx := 0; idx < len(testNet.nodes); idx++ {
		testNet.requireNumberOfPeers(t, idx, len(testNet.nodes)-1, longWait)
	}
}

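// singleTestReactor wires one pex.Reactor to hand-constructed channels so a
// test can feed envelopes in and observe outgoing messages and peer errors
// directly, without a full test network.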
type singleTestReactor struct {
	reactor  *pex.Reactor
	pexInCh  chan p2p.Envelope
	pexOutCh chan p2p.Envelope
	pexErrCh chan p2p.PeerError
	pexCh    *p2p.Channel
	peerCh   chan p2p.PeerUpdate
	manager  *p2p.PeerManager
}

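// setupSingle builds a singleTestReactor backed by an in-memory peer manager,
// starts it, and registers a cleanup that waits for it to stop.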
func setupSingle(ctx context.Context, t *testing.T) *singleTestReactor {
	t.Helper()

	nodeID := newNodeID(t, "a")
	chBuf := 2
	pexInCh := make(chan p2p.Envelope, chBuf)
	pexOutCh := make(chan p2p.Envelope, chBuf)
	pexErrCh := make(chan p2p.PeerError, chBuf)
	pexCh := p2p.NewChannel(
		p2p.ChannelID(pex.PexChannel),
		new(p2pproto.PexMessage),
		pexInCh,
		pexOutCh,
		pexErrCh,
	)

	peerCh := make(chan p2p.PeerUpdate, chBuf)
	peerUpdates := p2p.NewPeerUpdates(peerCh, chBuf)
	peerManager, err := p2p.NewPeerManager(nodeID, dbm.NewMemDB(), p2p.PeerManagerOptions{})
	require.NoError(t, err)

	chCreator := func(context.Context, *p2p.ChannelDescriptor) (*p2p.Channel, error) {
		return pexCh, nil
	}

	reactor, err := pex.NewReactor(ctx, log.TestingLogger(), peerManager, chCreator, peerUpdates)
	require.NoError(t, err)

	require.NoError(t, reactor.Start(ctx))
	t.Cleanup(reactor.Wait)

	return &singleTestReactor{
		reactor:  reactor,
		pexInCh:  pexInCh,
		pexOutCh: pexOutCh,
		pexErrCh: pexErrCh,
		pexCh:    pexCh,
		peerCh:   peerCh,
		manager:  peerManager,
	}
}

type reactorTestSuite struct {
	network     *p2ptest.Network
	logger      log.Logger
	reactors    map[types.NodeID]*pex.Reactor
	pexChannels map[types.NodeID]*p2p.Channel
	peerChans   map[types.NodeID]chan p2p.PeerUpdate
	peerUpdates map[types.NodeID]*p2p.PeerUpdates
	nodes       []types.NodeID
	mocks       []types.NodeID
	total       int
	opts        testOptions
}

type testOptions struct {
	MockNodes    int
	TotalNodes   int
	BufferSize   int
	MaxPeers     uint16
	MaxConnected uint16
}

// setupNetwork sets up a test suite with a network of nodes. Mock nodes are
// hollow nodes that the test can listen and send on directly.
func setupNetwork(ctx context.Context, t *testing.T, opts testOptions) *reactorTestSuite {
	t.Helper()

	require.Greater(t, opts.TotalNodes, opts.MockNodes)
	if opts.BufferSize == 0 {
		opts.BufferSize = defaultBufferSize
	}
	networkOpts := p2ptest.NetworkOptions{
		NumNodes:   opts.TotalNodes,
		BufferSize: opts.BufferSize,
		NodeOpts: p2ptest.NodeOptions{
			MaxPeers:     opts.MaxPeers,
			MaxConnected: opts.MaxConnected,
		},
	}
	chBuf := opts.BufferSize
	realNodes := opts.TotalNodes - opts.MockNodes

	rts := &reactorTestSuite{
		logger:      log.TestingLogger().With("testCase", t.Name()),
		network:     p2ptest.MakeNetwork(ctx, t, networkOpts),
		reactors:    make(map[types.NodeID]*pex.Reactor, realNodes),
		pexChannels: make(map[types.NodeID]*p2p.Channel, opts.TotalNodes),
		peerChans:   make(map[types.NodeID]chan p2p.PeerUpdate, opts.TotalNodes),
		peerUpdates: make(map[types.NodeID]*p2p.PeerUpdates, opts.TotalNodes),
		total:       opts.TotalNodes,
		opts:        opts,
	}

	// NOTE: we don't assert that the channels get drained after stopping the
	// reactor
	rts.pexChannels = rts.network.MakeChannelsNoCleanup(ctx, t, pex.ChannelDescriptor())

	idx := 0
	for nodeID := range rts.network.Nodes {
		rts.peerChans[nodeID] = make(chan p2p.PeerUpdate, chBuf)
		rts.peerUpdates[nodeID] = p2p.NewPeerUpdates(rts.peerChans[nodeID], chBuf)
		rts.network.Nodes[nodeID].PeerManager.Register(ctx, rts.peerUpdates[nodeID])

		chCreator := func(context.Context, *p2p.ChannelDescriptor) (*p2p.Channel, error) {
			return rts.pexChannels[nodeID], nil
		}

		// the first nodes in the array are always mock nodes
		if idx < opts.MockNodes {
			rts.mocks = append(rts.mocks, nodeID)
		} else {
			var err error
			rts.reactors[nodeID], err = pex.NewReactor(
				ctx,
				rts.logger.With("nodeID", nodeID),
				rts.network.Nodes[nodeID].PeerManager,
				chCreator,
				rts.peerUpdates[nodeID],
			)
			require.NoError(t, err)
		}
		rts.nodes = append(rts.nodes, nodeID)
		idx++
	}

	require.Len(t, rts.reactors, realNodes)

	t.Cleanup(func() {
		for _, reactor := range rts.reactors {
			if reactor.IsRunning() {
				reactor.Wait()
				require.False(t, reactor.IsRunning())
			}
		}
	})

	return rts
}

// starts up the pex reactors for each node
func (r *reactorTestSuite) start(ctx context.Context, t *testing.T) {
	t.Helper()

	for _, reactor := range r.reactors {
		require.NoError(t, reactor.Start(ctx))
		require.True(t, reactor.IsRunning())
	}
}

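// addNodes adds the given number of nodes to the test network, wiring up pex
// channels, peer-update subscriptions, and a reactor for each; the new
// reactors are not started here.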
func (r *reactorTestSuite) addNodes(ctx context.Context, t *testing.T, nodes int) {
	t.Helper()

	for i := 0; i < nodes; i++ {
		node := r.network.MakeNode(ctx, t, p2ptest.NodeOptions{
			MaxPeers:     r.opts.MaxPeers,
			MaxConnected: r.opts.MaxConnected,
		})
		r.network.Nodes[node.NodeID] = node
		nodeID := node.NodeID

		r.pexChannels[nodeID] = node.MakeChannelNoCleanup(ctx, t, pex.ChannelDescriptor())
		r.peerChans[nodeID] = make(chan p2p.PeerUpdate, r.opts.BufferSize)
		r.peerUpdates[nodeID] = p2p.NewPeerUpdates(r.peerChans[nodeID], r.opts.BufferSize)
		r.network.Nodes[nodeID].PeerManager.Register(ctx, r.peerUpdates[nodeID])

		chCreator := func(context.Context, *p2p.ChannelDescriptor) (*p2p.Channel, error) {
			return r.pexChannels[nodeID], nil
		}

		var err error
		r.reactors[nodeID], err = pex.NewReactor(
			ctx,
			r.logger.With("nodeID", nodeID),
			r.network.Nodes[nodeID].PeerManager,
			chCreator,
			r.peerUpdates[nodeID],
		)
		require.NoError(t, err)
		r.nodes = append(r.nodes, nodeID)
		r.total++
	}
}

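// listenFor consumes envelopes from the node's pex channel until one satisfies
// both conditional and assertion, failing the test if waitPeriod elapses first.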
func (r *reactorTestSuite) listenFor(
	ctx context.Context,
	t *testing.T,
	node types.NodeID,
	conditional func(msg *p2p.Envelope) bool,
	assertion func(t *testing.T, msg *p2p.Envelope) bool,
	waitPeriod time.Duration,
) {
	ctx, cancel := context.WithTimeout(ctx, waitPeriod)
	defer cancel()

	iter := r.pexChannels[node].Receive(ctx)
	for iter.Next(ctx) {
		envelope := iter.Envelope()
		if conditional(envelope) && assertion(t, envelope) {
			return
		}
	}

	if errors.Is(ctx.Err(), context.DeadlineExceeded) {
		require.Fail(t, "timed out waiting for message",
			"node=%v, waitPeriod=%s", node, waitPeriod)
	}
}

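// listenForRequest waits for toNode to receive a PexRequest from fromNode.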
func (r *reactorTestSuite) listenForRequest(ctx context.Context, t *testing.T, fromNode, toNode int, waitPeriod time.Duration) {
	to, from := r.checkNodePair(t, toNode, fromNode)
	conditional := func(msg *p2p.Envelope) bool {
		_, ok := msg.Message.(*p2pproto.PexRequest)
		return ok && msg.From == from
	}
	assertion := func(t *testing.T, msg *p2p.Envelope) bool {
		require.Equal(t, &p2pproto.PexRequest{}, msg.Message)
		return true
	}
	r.listenFor(ctx, t, to, conditional, assertion, waitPeriod)
}

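// pingAndlistenForNAddresses has toNode request addresses from fromNode and
// waits until it receives a PexResponse from fromNode containing exactly the
// expected number of addresses, re-sending the request otherwise.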
func (r *reactorTestSuite) pingAndlistenForNAddresses(
	ctx context.Context,
	t *testing.T,
	fromNode, toNode int,
	waitPeriod time.Duration,
	addresses int,
) {
	t.Helper()

	to, from := r.checkNodePair(t, toNode, fromNode)
	conditional := func(msg *p2p.Envelope) bool {
		_, ok := msg.Message.(*p2pproto.PexResponse)
		return ok && msg.From == from
	}
	assertion := func(t *testing.T, msg *p2p.Envelope) bool {
		m, ok := msg.Message.(*p2pproto.PexResponse)
		if !ok {
			require.Fail(t, "expected pex response v2")
			return true
		}
		// assert that we got the expected number of addresses
		if len(m.Addresses) == addresses {
			return true
		}
		// if we didn't get the right length, we wait and send the
		// request again
		time.Sleep(300 * time.Millisecond)
		r.sendRequest(ctx, t, toNode, fromNode)
		return false
	}
	r.sendRequest(ctx, t, toNode, fromNode)
	r.listenFor(ctx, t, to, conditional, assertion, waitPeriod)
}

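// listenForResponse waits for toNode to receive a PexResponse from fromNode
// carrying exactly the given addresses.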
func (r *reactorTestSuite) listenForResponse(
	ctx context.Context,
	t *testing.T,
	fromNode, toNode int,
	waitPeriod time.Duration,
	addresses []p2pproto.PexAddress,
) {
	to, from := r.checkNodePair(t, toNode, fromNode)
	conditional := func(msg *p2p.Envelope) bool {
		_, ok := msg.Message.(*p2pproto.PexResponse)
		return ok && msg.From == from
	}
	assertion := func(t *testing.T, msg *p2p.Envelope) bool {
		require.Equal(t, &p2pproto.PexResponse{Addresses: addresses}, msg.Message)
		return true
	}
	r.listenFor(ctx, t, to, conditional, assertion, waitPeriod)
}

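// listenForPeerUpdate waits for onNode to observe a peer update for withNode
// with the expected status.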
func (r *reactorTestSuite) listenForPeerUpdate(
	ctx context.Context,
	t *testing.T,
	onNode, withNode int,
	status p2p.PeerStatus,
	waitPeriod time.Duration,
) {
	on, with := r.checkNodePair(t, onNode, withNode)
	sub := r.network.Nodes[on].PeerManager.Subscribe(ctx)
	timesUp := time.After(waitPeriod)
	for {
		select {
		case <-ctx.Done():
			require.Fail(t, "operation canceled")
			return
		case peerUpdate := <-sub.Updates():
			if peerUpdate.NodeID == with {
				require.Equal(t, status, peerUpdate.Status)
				return
			}
		case <-timesUp:
			require.Fail(t, "timed out waiting for peer status", "%v with status %v",
				with, status)
			return
		}
	}
}

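// getAddressesFor returns the pex addresses of the nodes at the given indices.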
func (r *reactorTestSuite) getAddressesFor(nodes []int) []p2pproto.PexAddress {
	addresses := make([]p2pproto.PexAddress, len(nodes))
	for idx, node := range nodes {
		nodeID := r.nodes[node]
		addresses[idx] = p2pproto.PexAddress{
			URL: r.network.Nodes[nodeID].NodeAddress.String(),
		}
	}
	return addresses
}

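// sendRequest sends a PexRequest from fromNode to toNode.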
func (r *reactorTestSuite) sendRequest(ctx context.Context, t *testing.T, fromNode, toNode int) {
	t.Helper()
	to, from := r.checkNodePair(t, toNode, fromNode)
	require.NoError(t, r.pexChannels[from].Send(ctx, p2p.Envelope{
		To:      to,
		Message: &p2pproto.PexRequest{},
	}))
}

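// sendResponse sends a PexResponse from fromNode to toNode containing the
// addresses of the nodes in withNodes.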
func (r *reactorTestSuite) sendResponse(
	ctx context.Context,
	t *testing.T,
	fromNode, toNode int,
	withNodes []int,
) {
	t.Helper()
	from, to := r.checkNodePair(t, fromNode, toNode)
	addrs := r.getAddressesFor(withNodes)
	require.NoError(t, r.pexChannels[from].Send(ctx, p2p.Envelope{
		To: to,
		Message: &p2pproto.PexResponse{
			Addresses: addrs,
		},
	}))
}

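// requireNumberOfPeers waits until the node at nodeIndex knows about at least
// numPeers peers.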
func (r *reactorTestSuite) requireNumberOfPeers(
	t *testing.T,
	nodeIndex, numPeers int,
	waitPeriod time.Duration,
) {
	t.Helper()
	require.Eventuallyf(t, func() bool {
		actualNumPeers := len(r.network.Nodes[r.nodes[nodeIndex]].PeerManager.Peers())
		return actualNumPeers >= numPeers
	}, waitPeriod, checkFrequency, "peer failed to connect with the asserted amount of peers "+
		"index=%d, node=%q, waitPeriod=%s expected=%d actual=%d",
		nodeIndex, r.nodes[nodeIndex], waitPeriod, numPeers,
		len(r.network.Nodes[r.nodes[nodeIndex]].PeerManager.Peers()),
	)
}

func (r *reactorTestSuite) connectAll(ctx context.Context, t *testing.T) {
	r.connectN(ctx, t, r.total-1)
}

// connectN connects each node to n other nodes
func (r *reactorTestSuite) connectN(ctx context.Context, t *testing.T, n int) {
	if n >= r.total {
		require.Fail(t, "connectN: n must be less than the size of the network - 1")
	}

	for i := 0; i < r.total; i++ {
		for j := 0; j < n; j++ {
			r.connectPeers(ctx, t, i, (i+j+1)%r.total)
		}
	}
}

// connects node1 to node2
func (r *reactorTestSuite) connectPeers(ctx context.Context, t *testing.T, sourceNode, targetNode int) {
	t.Helper()

	node1, node2 := r.checkNodePair(t, sourceNode, targetNode)

	n1 := r.network.Nodes[node1]
	if n1 == nil {
		require.Fail(t, "connectPeers: source node %v is not part of the testnet", node1)
		return
	}

	n2 := r.network.Nodes[node2]
	if n2 == nil {
		require.Fail(t, "connectPeers: target node %v is not part of the testnet", node2)
		return
	}

	sourceSub := n1.PeerManager.Subscribe(ctx)
	targetSub := n2.PeerManager.Subscribe(ctx)

	sourceAddress := n1.NodeAddress
	targetAddress := n2.NodeAddress

	added, err := n1.PeerManager.Add(targetAddress)
	require.NoError(t, err)
	if !added {
		return
	}

	select {
	case peerUpdate := <-targetSub.Updates():
		require.Equal(t, p2p.PeerUpdate{
			NodeID: node1,
			Status: p2p.PeerStatusUp,
		}, peerUpdate)
	case <-time.After(2 * time.Second):
		require.Fail(t, "timed out waiting for peer", "%v accepting %v",
			targetNode, sourceNode)
	}

	select {
	case peerUpdate := <-sourceSub.Updates():
		require.Equal(t, p2p.PeerUpdate{
			NodeID: node2,
			Status: p2p.PeerStatusUp,
		}, peerUpdate)
	case <-time.After(2 * time.Second):
		require.Fail(t, "timed out waiting for peer", "%v dialing %v",
			sourceNode, targetNode)
	}

	added, err = n2.PeerManager.Add(sourceAddress)
	require.NoError(t, err)
	require.True(t, added)
}

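// checkNodePair validates that the two node indices are distinct and in range,
// and resolves them to node IDs.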
func (r *reactorTestSuite) checkNodePair(t *testing.T, first, second int) (types.NodeID, types.NodeID) {
	require.NotEqual(t, first, second)
	require.Less(t, first, r.total)
	require.Less(t, second, r.total)
	return r.nodes[first], r.nodes[second]
}

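// addAddresses adds the addresses of the nodes at the given indices to node's
// peer manager.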
func (r *reactorTestSuite) addAddresses(t *testing.T, node int, addrs []int) {
	peerManager := r.network.Nodes[r.nodes[node]].PeerManager
	for _, addr := range addrs {
		require.Less(t, addr, r.total)
		address := r.network.Nodes[r.nodes[addr]].NodeAddress
		added, err := peerManager.Add(address)
		require.NoError(t, err)
		require.True(t, added)
	}
}

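// newNodeID builds a deterministic node ID by repeating id to the required length.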
func newNodeID(t *testing.T, id string) types.NodeID {
	nodeID, err := types.NewNodeID(strings.Repeat(id, 2*types.NodeIDByteLength))
	require.NoError(t, err)
	return nodeID
}

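// randomNodeID derives a node ID from a freshly generated ed25519 key.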
func randomNodeID(t *testing.T) types.NodeID {
	return types.NodeIDFromPubKey(ed25519.GenPrivKey().PubKey())
}