blockchain: Reorg reactor (#3561)

* goroutines in blockchain reactor
* Added reference to the goroutine diagram
* Initial commit
* cleanup
* Undo testing_logger change, committed by mistake
* Fix the test loggers
* pulled some fsm code into pool.go
* added pool tests
* changes to the design: added block requests under peer; moved the request trigger into the reactor poolRoutine, now triggered by a ticker; in general, moved everything required for making block requests smarter into the poolRoutine; added a simple map of heights to keep track of what will need to be requested next; added a few more tests
* send errors to FSM in a different channel than blocks: send errors (RemovePeer) from the switch on a different channel than the one receiving blocks; renamed channels; added more pool tests
* more pool tests
* lint errors
* more tests
* more tests
* switch fast sync to the new implementation
* fixed data race in tests
* cleanup
* finished fsm tests
* address golangci comments :)
* address golangci comments :)
* Added timeout on next block needed to advance
* updating docs and cleanup
* fix issue in test from previous cleanup
* cleanup
* Added termination scenarios, tests and more cleanup
* small fixes to adr, comments and cleanup
* Fix bug in sendRequest(): if we tried to send a request to a peer not present in the switch, a missing continue statement caused the request to be blackholed in a peer that was removed and never retried (see the sketch after this commit message). While this bug was manifesting, the reactor kept asking for other blocks that would be stored and never consumed. Added the number of unconsumed blocks to the math for requesting blocks ahead of the current processing height, so eventually no more blocks are requested until the already-received ones are consumed.
* remove bpPeer's didTimeout field
* Use distinct err codes for peer timeout and FSM timeouts
* Don't allow peers to update with lower height
* review comments from Ethan and Zarko
* some cleanup, renaming, comments
* Move block execution into a separate goroutine
* Remove pool's numPending
* review comments
* fix lint, remove old blockchain reactor and duplicates in fsm tests
* small reorg around peer after review comments
* add the reactor spec
* verify block only once
* review comments
* change to int for max number of pending requests
* cleanup and godoc
* Add configuration flag for fast sync version
* golangci fixes
* fix config template
* move both reactor versions under blockchain
* cleanup, golint, renaming stuff
* updated documentation, fixed more golint warnings
* integrate with behavior package
* sync with master
* gofmt
* add changelog_pending entry
* move to improvements
* suggestion to changelog entry
6 years ago
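To make the sendRequest() fix described in that commit concrete, here is a minimal, self-contained sketch of the control-flow pattern involved. The names (`sendRequests`, `peerOf`, `retry`) are hypothetical illustrations, not the reactor's actual API:

```go
package main

import "fmt"

// Sketch of the bug: when the peer assigned to a height is no longer in the
// switch, the request must be rescheduled AND the loop must continue. Without
// the continue, execution falls through and "sends" the request to a peer
// that no longer exists, so it is never answered and never retried.
func sendRequests(heights []int64, peerOf func(int64) (string, bool), retry func(int64)) {
	for _, h := range heights {
		peer, ok := peerOf(h)
		if !ok {
			retry(h) // reschedule this height with another peer
			continue // the statement that was missing
		}
		fmt.Printf("requesting block %d from %s\n", h, peer)
	}
}

func main() {
	peers := map[int64]string{1: "peer-a", 3: "peer-c"} // height 2 has no peer
	sendRequests(
		[]int64{1, 2, 3},
		func(h int64) (string, bool) { p, ok := peers[h]; return p, ok },
		func(h int64) { fmt.Printf("rescheduling block %d\n", h) },
	)
}
```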
cleanup: Reduce and normalize import path aliasing. (#6975)

The code in the Tendermint repository makes heavy use of import aliasing. This is made necessary by our extensive reuse of common base package names, and by repetition of similar names across different subdirectories. Unfortunately we have not been very consistent about which packages we alias in various circumstances, and the aliases we use vary. In the spirit of the advice in the style guide and https://github.com/golang/go/wiki/CodeReviewComments#imports, this change makes an effort to clean up and normalize import aliasing.

This change makes no API or behavioral changes. It is a pure cleanup intended to help make the code more readable to developers (including myself) trying to understand what is being imported where. Only unexported names have been modified, and the changes were generated and applied mechanically with gofmt -r and comby, respecting the lexical and syntactic rules of Go. Even so, I did not fix every inconsistency. Where the changes would be too disruptive, I left it alone.

The principles I followed in this cleanup are (a sketch illustrating them follows):

- Remove aliases that restate the package name.
- Remove aliases where the base package name is unambiguous.
- Move overly-terse abbreviations from the import to the usage site.
- Fix lexical issues (remove underscores, remove capitalization).
- Fix import groupings to more closely match the style guide.
- Group blank (side-effecting) imports and ensure they are commented.
- Add aliases to multiple imports with the same base package name.

3 years ago
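As a hedged illustration of those principles, an import block in this spirit; the paths are taken from the reactor file below, but the grouping and comments are mine, not part of the commit:

```go
package example

import (
	"fmt" // standard library group first

	// no alias: the base name "blocksync" is unambiguous
	"github.com/tendermint/tendermint/internal/blocksync"
	// short alias: "state" is a heavily reused base name across the repo
	sm "github.com/tendermint/tendermint/internal/state"
	// alias: avoids shadowing the standard library "sync" package
	tmsync "github.com/tendermint/tendermint/libs/sync"
)

// Reference the imports so this illustrative file compiles on its own.
var (
	_ = fmt.Sprint(blocksync.MaxMsgSize)
	_ sm.State
	_ = tmsync.NewBool(false)
)
```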
blocksync: fix shutdown deadlock issue (#7030)

When shutting down blocksync, it is observed that the process can hang completely. A dump of running goroutines reveals that this is due to goroutines not listening on the correct shutdown signal. Namely, the `poolRoutine` goroutine does not wait on `pool.Quit`. The `poolRoutine` does not receive any other shutdown signal during `OnStop` because it must stop before `r.closeCh` is closed. Currently the `poolRoutine` listens on `closeCh`, which will not close until the `poolRoutine` stops and calls `poolWG.Done()`. (A minimal reproduction of the pattern follows this message.)

This change also puts the `requestRoutine()` in the `OnStart` method to make it more visible, since it does not rely on anything that is spawned in the `poolRoutine`.

```
goroutine 183 [semacquire]:
sync.runtime_Semacquire(0xc0000d3bd8)
	runtime/sema.go:56 +0x45
sync.(*WaitGroup).Wait(0xc0000d3bd0)
	sync/waitgroup.go:130 +0x65
github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStop(0xc0000d3a00)
	github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:193 +0x47
github.com/tendermint/tendermint/libs/service.(*BaseService).Stop(0xc0000d3a00, 0x0, 0x0)
	github.com/tendermint/tendermint/libs/service/service.go:171 +0x323
github.com/tendermint/tendermint/node.(*nodeImpl).OnStop(0xc00052c000)
	github.com/tendermint/tendermint/node/node.go:758 +0xc62
github.com/tendermint/tendermint/libs/service.(*BaseService).Stop(0xc00052c000, 0x0, 0x0)
	github.com/tendermint/tendermint/libs/service/service.go:171 +0x323
github.com/tendermint/tendermint/cmd/tendermint/commands.NewRunNodeCmd.func1.1()
	github.com/tendermint/tendermint/cmd/tendermint/commands/run_node.go:143 +0x62
github.com/tendermint/tendermint/libs/os.TrapSignal.func1(0xc000df6d20, 0x7f04a68da900, 0xc0004a8930, 0xc0005a72d8)
	github.com/tendermint/tendermint/libs/os/os.go:26 +0x102
created by github.com/tendermint/tendermint/libs/os.TrapSignal
	github.com/tendermint/tendermint/libs/os/os.go:22 +0xe6

goroutine 161 [select]:
github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).poolRoutine(0xc0000d3a00, 0x0)
	github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:464 +0x2b3
created by github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStart
	github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:174 +0xf1

goroutine 162 [select]:
github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).processBlockSyncCh(0xc0000d3a00)
	github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:310 +0x151
created by github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStart
	github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:177 +0x54

goroutine 163 [select]:
github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).processPeerUpdates(0xc0000d3a00)
	github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:363 +0x12b
created by github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStart
	github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:178 +0x76
```

3 years ago
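The ordering constraint is easier to see in isolation. Below is a minimal, self-contained sketch of the deadlock, with hypothetical names: `quit` plays the role of `pool.Quit` and `closeCh` the role of `r.closeCh`. A worker that waits only on `closeCh` can never exit, because `closeCh` is closed only after the WaitGroup the worker belongs to has been drained:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var (
		wg      sync.WaitGroup
		quit    = make(chan struct{}) // analogous to pool.Quit
		closeCh = make(chan struct{}) // closed only after wg.Wait returns
	)

	wg.Add(1)
	go func() { // analogous to poolRoutine
		defer wg.Done()
		select {
		case <-quit: // listening here avoids the deadlock
			fmt.Println("worker: got quit signal")
		case <-closeCh: // listening ONLY here would hang forever
			fmt.Println("worker: got close signal")
		}
	}()

	time.Sleep(10 * time.Millisecond)
	close(quit) // the pool's Stop closes its Quit channel first
	wg.Wait()   // would deadlock if the worker waited only on closeCh
	close(closeCh)
	fmt.Println("clean shutdown")
}
```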
blocksync: fix shutdown deadlock issue (#7030) When shutting down blocksync, it is observed that the process can hang completely. A dump of running goroutines reveals that this is due to goroutines not listening on the correct shutdown signal. Namely, the `poolRoutine` goroutine does not wait on `pool.Quit`. The `poolRoutine` does not receive any other shutdown signal during `OnStop` becuase it must stop before the `r.closeCh` is closed. Currently the `poolRoutine` listens in the `closeCh` which will not close until the `poolRoutine` stops and calls `poolWG.Done()`. This change also puts the `requestRoutine()` in the `OnStart` method to make it more visible since it does not rely on anything that is spawned in the `poolRoutine`. ``` goroutine 183 [semacquire]: sync.runtime_Semacquire(0xc0000d3bd8) runtime/sema.go:56 +0x45 sync.(*WaitGroup).Wait(0xc0000d3bd0) sync/waitgroup.go:130 +0x65 github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStop(0xc0000d3a00) github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:193 +0x47 github.com/tendermint/tendermint/libs/service.(*BaseService).Stop(0xc0000d3a00, 0x0, 0x0) github.com/tendermint/tendermint/libs/service/service.go:171 +0x323 github.com/tendermint/tendermint/node.(*nodeImpl).OnStop(0xc00052c000) github.com/tendermint/tendermint/node/node.go:758 +0xc62 github.com/tendermint/tendermint/libs/service.(*BaseService).Stop(0xc00052c000, 0x0, 0x0) github.com/tendermint/tendermint/libs/service/service.go:171 +0x323 github.com/tendermint/tendermint/cmd/tendermint/commands.NewRunNodeCmd.func1.1() github.com/tendermint/tendermint/cmd/tendermint/commands/run_node.go:143 +0x62 github.com/tendermint/tendermint/libs/os.TrapSignal.func1(0xc000df6d20, 0x7f04a68da900, 0xc0004a8930, 0xc0005a72d8) github.com/tendermint/tendermint/libs/os/os.go:26 +0x102 created by github.com/tendermint/tendermint/libs/os.TrapSignal github.com/tendermint/tendermint/libs/os/os.go:22 +0xe6 goroutine 161 [select]: github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).poolRoutine(0xc0000d3a00, 0x0) github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:464 +0x2b3 created by github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStart github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:174 +0xf1 goroutine 162 [select]: github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).processBlockSyncCh(0xc0000d3a00) github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:310 +0x151 created by github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStart github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:177 +0x54 goroutine 163 [select]: github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).processPeerUpdates(0xc0000d3a00) github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:363 +0x12b created by github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStart github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:178 +0x76 ```
3 years ago
blocksync: fix shutdown deadlock issue (#7030) When shutting down blocksync, it is observed that the process can hang completely. A dump of running goroutines reveals that this is due to goroutines not listening on the correct shutdown signal. Namely, the `poolRoutine` goroutine does not wait on `pool.Quit`. The `poolRoutine` does not receive any other shutdown signal during `OnStop` becuase it must stop before the `r.closeCh` is closed. Currently the `poolRoutine` listens in the `closeCh` which will not close until the `poolRoutine` stops and calls `poolWG.Done()`. This change also puts the `requestRoutine()` in the `OnStart` method to make it more visible since it does not rely on anything that is spawned in the `poolRoutine`. ``` goroutine 183 [semacquire]: sync.runtime_Semacquire(0xc0000d3bd8) runtime/sema.go:56 +0x45 sync.(*WaitGroup).Wait(0xc0000d3bd0) sync/waitgroup.go:130 +0x65 github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStop(0xc0000d3a00) github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:193 +0x47 github.com/tendermint/tendermint/libs/service.(*BaseService).Stop(0xc0000d3a00, 0x0, 0x0) github.com/tendermint/tendermint/libs/service/service.go:171 +0x323 github.com/tendermint/tendermint/node.(*nodeImpl).OnStop(0xc00052c000) github.com/tendermint/tendermint/node/node.go:758 +0xc62 github.com/tendermint/tendermint/libs/service.(*BaseService).Stop(0xc00052c000, 0x0, 0x0) github.com/tendermint/tendermint/libs/service/service.go:171 +0x323 github.com/tendermint/tendermint/cmd/tendermint/commands.NewRunNodeCmd.func1.1() github.com/tendermint/tendermint/cmd/tendermint/commands/run_node.go:143 +0x62 github.com/tendermint/tendermint/libs/os.TrapSignal.func1(0xc000df6d20, 0x7f04a68da900, 0xc0004a8930, 0xc0005a72d8) github.com/tendermint/tendermint/libs/os/os.go:26 +0x102 created by github.com/tendermint/tendermint/libs/os.TrapSignal github.com/tendermint/tendermint/libs/os/os.go:22 +0xe6 goroutine 161 [select]: github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).poolRoutine(0xc0000d3a00, 0x0) github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:464 +0x2b3 created by github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStart github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:174 +0xf1 goroutine 162 [select]: github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).processBlockSyncCh(0xc0000d3a00) github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:310 +0x151 created by github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStart github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:177 +0x54 goroutine 163 [select]: github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).processPeerUpdates(0xc0000d3a00) github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:363 +0x12b created by github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStart github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:178 +0x76 ```
3 years ago
package v0

import (
	"fmt"
	"runtime/debug"
	"sync"
	"time"

	"github.com/tendermint/tendermint/internal/blocksync"
	"github.com/tendermint/tendermint/internal/consensus"
	"github.com/tendermint/tendermint/internal/p2p"
	sm "github.com/tendermint/tendermint/internal/state"
	"github.com/tendermint/tendermint/internal/store"
	"github.com/tendermint/tendermint/libs/log"
	"github.com/tendermint/tendermint/libs/service"
	tmsync "github.com/tendermint/tendermint/libs/sync"
	bcproto "github.com/tendermint/tendermint/proto/tendermint/blocksync"
	"github.com/tendermint/tendermint/types"
)
var (
	_ service.Service = (*Reactor)(nil)

	// ChannelShims contains a map of ChannelDescriptorShim objects, where each
	// object wraps a reference to a legacy p2p ChannelDescriptor and the
	// corresponding p2p proto.Message the new p2p Channel is responsible for
	// handling.
	//
	// TODO: Remove once p2p refactor is complete.
	// ref: https://github.com/tendermint/tendermint/issues/5670
	ChannelShims = map[p2p.ChannelID]*p2p.ChannelDescriptorShim{
		BlockSyncChannel: {
			MsgType: new(bcproto.Message),
			Descriptor: &p2p.ChannelDescriptor{
				ID:                  byte(BlockSyncChannel),
				Priority:            5,
				SendQueueCapacity:   1000,
				RecvBufferCapacity:  1024,
				RecvMessageCapacity: blocksync.MaxMsgSize,
				MaxSendBytes:        100,
			},
		},
	}
)
const (
	// BlockSyncChannel is a channel for blocks and status updates
	BlockSyncChannel = p2p.ChannelID(0x40)

	// try to process queued blocks every 10 ms
	trySyncIntervalMS = 10

	// ask for best height every 10s
	statusUpdateIntervalSeconds = 10

	// check if we should switch to consensus reactor
	switchToConsensusIntervalSeconds = 1

	// switch to consensus after this duration of inactivity
	syncTimeout = 60 * time.Second
)
type consensusReactor interface {
	// For when we switch from the block sync reactor to the consensus
	// machine.
	SwitchToConsensus(state sm.State, skipWAL bool)
}

type peerError struct {
	err    error
	peerID types.NodeID
}

func (e peerError) Error() string {
	return fmt.Sprintf("error with peer %v: %s", e.peerID, e.err.Error())
}
// Reactor handles long-term catchup syncing.
type Reactor struct {
	service.BaseService

	// immutable
	initialState sm.State

	blockExec   *sm.BlockExecutor
	store       *store.BlockStore
	pool        *BlockPool
	consReactor consensusReactor
	blockSync   *tmsync.AtomicBool

	blockSyncCh *p2p.Channel

	// blockSyncOutBridgeCh defines a channel that acts as a bridge between sending Envelope
	// messages that the reactor will consume in processBlockSyncCh and receiving messages
	// from the peer updates channel and other goroutines. We do this instead of directly
	// sending on blockSyncCh.Out to avoid race conditions in the case where other goroutines
	// send Envelopes directly to the blockSyncCh.Out channel, since processBlockSyncCh
	// may close the blockSyncCh.Out channel at the same time that other goroutines send to
	// blockSyncCh.Out.
	blockSyncOutBridgeCh chan p2p.Envelope

	peerUpdates *p2p.PeerUpdates
	closeCh     chan struct{}

	requestsCh <-chan BlockRequest
	errorsCh   <-chan peerError

	// poolWG is used to synchronize the graceful shutdown of the poolRoutine and
	// requestRoutine spawned goroutines when stopping the reactor and before
	// stopping the p2p Channel(s).
	poolWG sync.WaitGroup

	metrics *consensus.Metrics

	syncStartTime time.Time
}
// NewReactor returns a new reactor instance.
func NewReactor(
	logger log.Logger,
	state sm.State,
	blockExec *sm.BlockExecutor,
	store *store.BlockStore,
	consReactor consensusReactor,
	blockSyncCh *p2p.Channel,
	peerUpdates *p2p.PeerUpdates,
	blockSync bool,
	metrics *consensus.Metrics,
) (*Reactor, error) {
	if state.LastBlockHeight != store.Height() {
		return nil, fmt.Errorf("state (%v) and store (%v) height mismatch", state.LastBlockHeight, store.Height())
	}

	startHeight := store.Height() + 1
	if startHeight == 1 {
		startHeight = state.InitialHeight
	}

	requestsCh := make(chan BlockRequest, maxTotalRequesters)
	errorsCh := make(chan peerError, maxPeerErrBuffer) // NOTE: The capacity should be larger than the peer count.

	r := &Reactor{
		initialState:         state,
		blockExec:            blockExec,
		store:                store,
		pool:                 NewBlockPool(startHeight, requestsCh, errorsCh),
		consReactor:          consReactor,
		blockSync:            tmsync.NewBool(blockSync),
		requestsCh:           requestsCh,
		errorsCh:             errorsCh,
		blockSyncCh:          blockSyncCh,
		blockSyncOutBridgeCh: make(chan p2p.Envelope),
		peerUpdates:          peerUpdates,
		closeCh:              make(chan struct{}),
		metrics:              metrics,
		syncStartTime:        time.Time{},
	}

	r.BaseService = *service.NewBaseService(logger, "BlockSync", r)
	return r, nil
}
// OnStart starts separate goroutines for each p2p Channel and listens for
// envelopes on each. In addition, it also listens for peer updates and handles
// messages on that p2p channel accordingly. The caller must be sure to execute
// OnStop to ensure the outbound p2p Channels are closed.
//
// If blockSync is enabled, we also start the pool and the pool processing
// goroutine. If the pool fails to start, an error is returned.
func (r *Reactor) OnStart() error {
	if r.blockSync.IsSet() {
		if err := r.pool.Start(); err != nil {
			return err
		}
		r.poolWG.Add(1)
		go r.requestRoutine()

		r.poolWG.Add(1)
		go r.poolRoutine(false)
	}

	go r.processBlockSyncCh()
	go r.processPeerUpdates()

	return nil
}
// OnStop stops the reactor by signaling to all spawned goroutines to exit and
// blocking until they all exit.
func (r *Reactor) OnStop() {
	if r.blockSync.IsSet() {
		if err := r.pool.Stop(); err != nil {
			r.Logger.Error("failed to stop pool", "err", err)
		}
	}

	// wait for the poolRoutine and requestRoutine goroutines to gracefully exit
	r.poolWG.Wait()

	// Close closeCh to signal to all spawned goroutines to gracefully exit. All
	// p2p Channels should execute Close().
	close(r.closeCh)

	// Wait for all p2p Channels to be closed before returning. This ensures we
	// can easily reason about synchronization of all p2p Channels and ensure no
	// panics will occur.
	<-r.blockSyncCh.Done()
	<-r.peerUpdates.Done()
}
// respondToPeer loads a block and sends it to the requesting peer, if we have
// it. Otherwise, we'll respond saying we do not have it.
func (r *Reactor) respondToPeer(msg *bcproto.BlockRequest, peerID types.NodeID) {
	block := r.store.LoadBlock(msg.Height)
	if block != nil {
		blockProto, err := block.ToProto()
		if err != nil {
			r.Logger.Error("failed to convert msg to protobuf", "err", err)
			return
		}

		r.blockSyncCh.Out <- p2p.Envelope{
			To:      peerID,
			Message: &bcproto.BlockResponse{Block: blockProto},
		}

		return
	}

	r.Logger.Info("peer requesting a block we do not have", "peer", peerID, "height", msg.Height)
	r.blockSyncCh.Out <- p2p.Envelope{
		To:      peerID,
		Message: &bcproto.NoBlockResponse{Height: msg.Height},
	}
}
// handleBlockSyncMessage handles envelopes sent from peers on the
// BlockSyncChannel. It returns an error only if the Envelope.Message is unknown
// for this channel. This should never be called outside of handleMessage.
func (r *Reactor) handleBlockSyncMessage(envelope p2p.Envelope) error {
	logger := r.Logger.With("peer", envelope.From)

	switch msg := envelope.Message.(type) {
	case *bcproto.BlockRequest:
		r.respondToPeer(msg, envelope.From)

	case *bcproto.BlockResponse:
		block, err := types.BlockFromProto(msg.Block)
		if err != nil {
			logger.Error("failed to convert block from proto", "err", err)
			return err
		}

		r.pool.AddBlock(envelope.From, block, block.Size())

	case *bcproto.StatusRequest:
		r.blockSyncCh.Out <- p2p.Envelope{
			To: envelope.From,
			Message: &bcproto.StatusResponse{
				Height: r.store.Height(),
				Base:   r.store.Base(),
			},
		}

	case *bcproto.StatusResponse:
		r.pool.SetPeerRange(envelope.From, msg.Base, msg.Height)

	case *bcproto.NoBlockResponse:
		logger.Debug("peer does not have the requested block", "height", msg.Height)

	default:
		return fmt.Errorf("received unknown message: %T", msg)
	}

	return nil
}
// handleMessage handles an Envelope sent from a peer on a specific p2p Channel.
// It will handle errors and any possible panics gracefully. A caller can handle
// any error returned by sending a PeerError on the respective channel.
func (r *Reactor) handleMessage(chID p2p.ChannelID, envelope p2p.Envelope) (err error) {
	defer func() {
		if e := recover(); e != nil {
			err = fmt.Errorf("panic in processing message: %v", e)
			r.Logger.Error(
				"recovering from processing message panic",
				"err", err,
				"stack", string(debug.Stack()),
			)
		}
	}()

	r.Logger.Debug("received message", "message", envelope.Message, "peer", envelope.From)

	switch chID {
	case BlockSyncChannel:
		err = r.handleBlockSyncMessage(envelope)

	default:
		err = fmt.Errorf("unknown channel ID (%d) for envelope (%v)", chID, envelope)
	}

	return err
}
// processBlockSyncCh initiates a blocking process where we listen for and handle
// envelopes on the BlockSyncChannel and blockSyncOutBridgeCh. Any error encountered during
// message execution will result in a PeerError being sent on the BlockSyncChannel.
// When the reactor is stopped, we will catch the signal and close the p2p Channel
// gracefully.
func (r *Reactor) processBlockSyncCh() {
	defer r.blockSyncCh.Close()

	for {
		select {
		case envelope := <-r.blockSyncCh.In:
			if err := r.handleMessage(r.blockSyncCh.ID, envelope); err != nil {
				r.Logger.Error("failed to process message", "ch_id", r.blockSyncCh.ID, "envelope", envelope, "err", err)
				r.blockSyncCh.Error <- p2p.PeerError{
					NodeID: envelope.From,
					Err:    err,
				}
			}

		case envelope := <-r.blockSyncOutBridgeCh:
			r.blockSyncCh.Out <- envelope

		case <-r.closeCh:
			r.Logger.Debug("stopped listening on block sync channel; closing...")
			return
		}
	}
}
// processPeerUpdate processes a PeerUpdate.
func (r *Reactor) processPeerUpdate(peerUpdate p2p.PeerUpdate) {
	r.Logger.Debug("received peer update", "peer", peerUpdate.NodeID, "status", peerUpdate.Status)

	// XXX: Pool#RedoRequest can sometimes give us an empty peer.
	if len(peerUpdate.NodeID) == 0 {
		return
	}

	switch peerUpdate.Status {
	case p2p.PeerStatusUp:
		// send a status update to the newly added peer
		r.blockSyncOutBridgeCh <- p2p.Envelope{
			To: peerUpdate.NodeID,
			Message: &bcproto.StatusResponse{
				Base:   r.store.Base(),
				Height: r.store.Height(),
			},
		}

	case p2p.PeerStatusDown:
		r.pool.RemovePeer(peerUpdate.NodeID)
	}
}
// processPeerUpdates initiates a blocking process where we listen for and handle
// PeerUpdate messages. When the reactor is stopped, we will catch the signal and
// close the p2p PeerUpdatesCh gracefully.
func (r *Reactor) processPeerUpdates() {
	defer r.peerUpdates.Close()

	for {
		select {
		case peerUpdate := <-r.peerUpdates.Updates():
			r.processPeerUpdate(peerUpdate)

		case <-r.closeCh:
			r.Logger.Debug("stopped listening on peer updates channel; closing...")
			return
		}
	}
}
// SwitchToBlockSync is called by the state sync reactor when switching to
// block sync.
func (r *Reactor) SwitchToBlockSync(state sm.State) error {
	r.blockSync.Set()
	r.initialState = state
	r.pool.height = state.LastBlockHeight + 1

	if err := r.pool.Start(); err != nil {
		return err
	}

	r.syncStartTime = time.Now()

	r.poolWG.Add(1)
	go r.requestRoutine()

	r.poolWG.Add(1)
	go r.poolRoutine(true)

	return nil
}
// requestRoutine forwards block requests and peer errors from the pool to the
// p2p layer, and periodically broadcasts a StatusRequest to all peers.
func (r *Reactor) requestRoutine() {
	statusUpdateTicker := time.NewTicker(statusUpdateIntervalSeconds * time.Second)
	defer statusUpdateTicker.Stop()

	defer r.poolWG.Done()

	for {
		select {
		case <-r.closeCh:
			return

		case <-r.pool.Quit():
			return

		case request := <-r.requestsCh:
			r.blockSyncOutBridgeCh <- p2p.Envelope{
				To:      request.PeerID,
				Message: &bcproto.BlockRequest{Height: request.Height},
			}

		case pErr := <-r.errorsCh:
			r.blockSyncCh.Error <- p2p.PeerError{
				NodeID: pErr.peerID,
				Err:    pErr.err,
			}

		case <-statusUpdateTicker.C:
			r.poolWG.Add(1)

			go func() {
				defer r.poolWG.Done()

				r.blockSyncOutBridgeCh <- p2p.Envelope{
					Broadcast: true,
					Message:   &bcproto.StatusRequest{},
				}
			}()
		}
	}
}
// poolRoutine handles messages from the poolReactor telling the reactor what to
// do.
//
// NOTE: Don't sleep in the FOR_LOOP or otherwise slow it down!
func (r *Reactor) poolRoutine(stateSynced bool) {
	var (
		trySyncTicker           = time.NewTicker(trySyncIntervalMS * time.Millisecond)
		switchToConsensusTicker = time.NewTicker(switchToConsensusIntervalSeconds * time.Second)

		blocksSynced = uint64(0)

		chainID = r.initialState.ChainID
		state   = r.initialState

		lastHundred = time.Now()
		lastRate    = 0.0

		didProcessCh = make(chan struct{}, 1)
	)

	defer trySyncTicker.Stop()
	defer switchToConsensusTicker.Stop()
	defer r.poolWG.Done()

FOR_LOOP:
	for {
		select {
		case <-switchToConsensusTicker.C:
			var (
				height, numPending, lenRequesters = r.pool.GetStatus()
				lastAdvance                       = r.pool.LastAdvance()
			)

			r.Logger.Debug(
				"consensus ticker",
				"num_pending", numPending,
				"total", lenRequesters,
				"height", height,
			)

			switch {
			case r.pool.IsCaughtUp():
				r.Logger.Info("switching to consensus reactor", "height", height)

			case time.Since(lastAdvance) > syncTimeout:
				r.Logger.Error("no progress since last advance", "last_advance", lastAdvance)

			default:
				r.Logger.Info(
					"not caught up yet",
					"height", height,
					"max_peer_height", r.pool.MaxPeerHeight(),
					"timeout_in", syncTimeout-time.Since(lastAdvance),
				)
				continue
			}

			if err := r.pool.Stop(); err != nil {
				r.Logger.Error("failed to stop pool", "err", err)
			}

			r.blockSync.UnSet()

			if r.consReactor != nil {
				r.consReactor.SwitchToConsensus(state, blocksSynced > 0 || stateSynced)
			}

			break FOR_LOOP

		case <-trySyncTicker.C:
			select {
			case didProcessCh <- struct{}{}:
			default:
			}

		case <-didProcessCh:
			// NOTE: It is a subtle mistake to process more than a single block at a
			// time (e.g. 10) here, because we only send one BlockRequest per loop
			// iteration. The ratio mismatch can result in starving of blocks, i.e. a
			// sudden burst of requests and responses, and repeat. Consequently, it is
			// better to split these routines rather than coupling them as it is
			// written here.
			//
			// TODO: Uncouple from request routine.

			// see if there are any blocks to sync
			first, second := r.pool.PeekTwoBlocks()
			if first == nil || second == nil {
				// we need both to sync the first block
				continue FOR_LOOP
			} else {
				// try again quickly next loop
				didProcessCh <- struct{}{}
			}

			var (
				firstParts         = first.MakePartSet(types.BlockPartSizeBytes)
				firstPartSetHeader = firstParts.Header()
				firstID            = types.BlockID{Hash: first.Hash(), PartSetHeader: firstPartSetHeader}
			)

			// Finally, verify the first block using the second's commit.
			//
			// NOTE: We can probably make this more efficient, but note that calling
			// first.Hash() doesn't verify the tx contents, so MakePartSet() is
			// currently necessary.
			err := state.Validators.VerifyCommitLight(chainID, firstID, first.Height, second.LastCommit)
			if err != nil {
				err = fmt.Errorf("invalid last commit: %w", err)
				r.Logger.Error(
					err.Error(),
					"last_commit", second.LastCommit,
					"block_id", firstID,
					"height", first.Height,
				)

				// NOTE: We've already removed the peer's request, but we still need
				// to clean up the rest.
				peerID := r.pool.RedoRequest(first.Height)
				r.blockSyncCh.Error <- p2p.PeerError{
					NodeID: peerID,
					Err:    err,
				}

				peerID2 := r.pool.RedoRequest(second.Height)
				if peerID2 != peerID {
					r.blockSyncCh.Error <- p2p.PeerError{
						NodeID: peerID2,
						Err:    err,
					}
				}

				continue FOR_LOOP
			} else {
				r.pool.PopRequest()

				// TODO: batch saves so we do not persist to disk every block
				r.store.SaveBlock(first, firstParts, second.LastCommit)

				var err error

				// TODO: Same thing for app - but we would need a way to get the hash
				// without persisting the state.
				state, err = r.blockExec.ApplyBlock(state, firstID, first)
				if err != nil {
					// TODO: This is bad, are we zombie?
					panic(fmt.Sprintf("failed to process committed block (%d:%X): %v", first.Height, first.Hash(), err))
				}

				r.metrics.RecordConsMetrics(first)

				blocksSynced++

				if blocksSynced%100 == 0 {
					lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())
					r.Logger.Info(
						"block sync rate",
						"height", r.pool.height,
						"max_peer_height", r.pool.MaxPeerHeight(),
						"blocks/s", lastRate,
					)

					lastHundred = time.Now()
				}
			}

			continue FOR_LOOP

		case <-r.closeCh:
			break FOR_LOOP

		case <-r.pool.Quit():
			break FOR_LOOP
		}
	}
}
// GetMaxPeerBlockHeight returns the highest block height reported by any peer.
func (r *Reactor) GetMaxPeerBlockHeight() int64 {
	return r.pool.MaxPeerHeight()
}

// GetTotalSyncedTime returns how long block sync has been running, or zero if
// block sync is not active.
func (r *Reactor) GetTotalSyncedTime() time.Duration {
	if !r.blockSync.IsSet() || r.syncStartTime.IsZero() {
		return time.Duration(0)
	}
	return time.Since(r.syncStartTime)
}

// GetRemainingSyncTime estimates the time remaining until the node is caught
// up, based on the number of blocks still to sync and the recent sync rate.
func (r *Reactor) GetRemainingSyncTime() time.Duration {
	if !r.blockSync.IsSet() {
		return time.Duration(0)
	}

	targetSyncs := r.pool.targetSyncBlocks()
	currentSyncs := r.store.Height() - r.pool.startHeight + 1
	lastSyncRate := r.pool.getLastSyncRate()
	if currentSyncs < 0 || lastSyncRate < 0.001 {
		return time.Duration(0)
	}

	remain := float64(targetSyncs-currentSyncs) / lastSyncRate
	return time.Duration(int64(remain * float64(time.Second)))
}