package v1

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"

	"github.com/tendermint/tendermint/libs/log"
	"github.com/tendermint/tendermint/p2p"
	"github.com/tendermint/tendermint/types"
)

type testPeer struct {
	id     p2p.ID
	height int64
}

// testBcR stubs the reactor-side callbacks used by the block pool;
// sendBlockRequest records the number of requests sent in testResults.
type testBcR struct {
	logger log.Logger
}

type testValues struct {
	numRequestsSent int
}

var testResults testValues

func resetPoolTestResults() {
	testResults.numRequestsSent = 0
}

func (testR *testBcR) sendPeerError(err error, peerID p2p.ID) {
}

func (testR *testBcR) sendStatusRequest() {
}

func (testR *testBcR) sendBlockRequest(peerID p2p.ID, height int64) error {
	testResults.numRequestsSent++
	return nil
}

func (testR *testBcR) resetStateTimer(name string, timer **time.Timer, timeout time.Duration) {
}

func (testR *testBcR) switchToConsensus() {
}

func newTestBcR() *testBcR {
	testBcR := &testBcR{logger: log.TestingLogger()}
	return testBcR
}
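
// tPBlocks specifies, for a given height, the peer the block was requested from
// and whether the block should be created (simulating that it was received).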
type tPBlocks struct {
	id     p2p.ID
	create bool
}

// Makes a block pool with specified current height, list of peers, block requests and block responses
func makeBlockPool(bcr *testBcR, height int64, peers []BpPeer, blocks map[int64]tPBlocks) *BlockPool {
	bPool := NewBlockPool(height, bcr)
	bPool.SetLogger(bcr.logger)

	txs := []types.Tx{types.Tx("foo"), types.Tx("bar")}

	var maxH int64
	for _, p := range peers {
		if p.Height > maxH {
			maxH = p.Height
		}
		bPool.peers[p.ID] = NewBpPeer(p.ID, p.Height, bcr.sendPeerError, nil)
		bPool.peers[p.ID].SetLogger(bcr.logger)
	}
	bPool.MaxPeerHeight = maxH
	for h, p := range blocks {
		bPool.blocks[h] = p.id
		bPool.peers[p.id].RequestSent(h)
		if p.create {
			// simulate that a block at height h has been received
			_ = bPool.peers[p.id].AddBlock(types.MakeBlock(h, txs, nil, nil), 100)
		}
	}
	return bPool
}
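
// assertPeerSetsEquivalent checks that two peer maps contain the same peers
// with the same heights, pending request counts, and stored blocks.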
func assertPeerSetsEquivalent(t *testing.T, set1 map[p2p.ID]*BpPeer, set2 map[p2p.ID]*BpPeer) {
	assert.Equal(t, len(set1), len(set2))
	for peerID, peer1 := range set1 {
		peer2 := set2[peerID]
		assert.NotNil(t, peer2)
		assert.Equal(t, peer1.NumPendingBlockRequests, peer2.NumPendingBlockRequests)
		assert.Equal(t, peer1.Height, peer2.Height)
		assert.Equal(t, len(peer1.blocks), len(peer2.blocks))
		for h, block1 := range peer1.blocks {
			block2 := peer2.blocks[h]
			// block1 and block2 could be nil if a request was made but no block was received
			assert.Equal(t, block1, block2)
		}
	}
}

func assertBlockPoolEquivalent(t *testing.T, poolWanted, pool *BlockPool) {
	assert.Equal(t, poolWanted.blocks, pool.blocks)
	assertPeerSetsEquivalent(t, poolWanted.peers, pool.peers)
	assert.Equal(t, poolWanted.MaxPeerHeight, pool.MaxPeerHeight)
	assert.Equal(t, poolWanted.Height, pool.Height)
}
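
// TestBlockPoolUpdatePeer checks that UpdatePeer adds a new peer, raises an existing
// peer's height, and returns errPeerTooShort or errPeerLowersItsHeight (dropping the
// peer) when the reported height is below the pool height or lower than before.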
func TestBlockPoolUpdatePeer(t *testing.T) {
	testBcR := newTestBcR()

	tests := []struct {
		name       string
		pool       *BlockPool
		args       testPeer
		poolWanted *BlockPool
		errWanted  error
	}{
		{
			name:       "add a first short peer",
			pool:       makeBlockPool(testBcR, 100, []BpPeer{}, map[int64]tPBlocks{}),
			args:       testPeer{"P1", 50},
			errWanted:  errPeerTooShort,
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{}, map[int64]tPBlocks{}),
		},
		{
			name:       "add a first good peer",
			pool:       makeBlockPool(testBcR, 100, []BpPeer{}, map[int64]tPBlocks{}),
			args:       testPeer{"P1", 101},
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 101}}, map[int64]tPBlocks{}),
		},
		{
			name:       "increase the height of P1 from 120 to 123",
			pool:       makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 120}}, map[int64]tPBlocks{}),
			args:       testPeer{"P1", 123},
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 123}}, map[int64]tPBlocks{}),
		},
		{
			name:       "decrease the height of P1 from 120 to 110",
			pool:       makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 120}}, map[int64]tPBlocks{}),
			args:       testPeer{"P1", 110},
			errWanted:  errPeerLowersItsHeight,
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{}, map[int64]tPBlocks{}),
		},
		{
			name: "decrease the height of P1 from 105 to 102 with blocks",
			pool: makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 105}},
				map[int64]tPBlocks{
					100: {"P1", true}, 101: {"P1", true}, 102: {"P1", true}}),
			args:      testPeer{"P1", 102},
			errWanted: errPeerLowersItsHeight,
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{},
				map[int64]tPBlocks{}),
		},
	}

	for _, tt := range tests {
		tt := tt
		t.Run(tt.name, func(t *testing.T) {
			pool := tt.pool
			err := pool.UpdatePeer(tt.args.id, tt.args.height)
			assert.Equal(t, tt.errWanted, err)
			assert.Equal(t, tt.poolWanted.blocks, tt.pool.blocks)
			assertPeerSetsEquivalent(t, tt.poolWanted.peers, tt.pool.peers)
			assert.Equal(t, tt.poolWanted.MaxPeerHeight, tt.pool.MaxPeerHeight)
		})
	}
}
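
// TestBlockPoolRemovePeer checks that RemovePeer deletes the given peer together
// with all blocks received from it, and leaves the pool untouched for an unknown peer.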
func TestBlockPoolRemovePeer(t *testing.T) {
	testBcR := newTestBcR()

	type args struct {
		peerID p2p.ID
		err    error
	}

	tests := []struct {
		name       string
		pool       *BlockPool
		args       args
		poolWanted *BlockPool
	}{
		{
			name:       "attempt to delete non-existing peer",
			pool:       makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 120}}, map[int64]tPBlocks{}),
			args:       args{"P99", nil},
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 120}}, map[int64]tPBlocks{}),
		},
		{
			name:       "delete the only peer without blocks",
			pool:       makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 120}}, map[int64]tPBlocks{}),
			args:       args{"P1", nil},
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{}, map[int64]tPBlocks{}),
		},
		{
			name: "delete the shortest of two peers without blocks",
			pool: makeBlockPool(
				testBcR,
				100,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 120}},
				map[int64]tPBlocks{}),
			args:       args{"P1", nil},
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{{ID: "P2", Height: 120}}, map[int64]tPBlocks{}),
		},
		{
			name: "delete the tallest of two peers without blocks",
			pool: makeBlockPool(
				testBcR,
				100,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 120}},
				map[int64]tPBlocks{}),
			args:       args{"P2", nil},
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 100}}, map[int64]tPBlocks{}),
		},
		{
			name: "delete the only peer with block requests sent and blocks received",
			pool: makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 120}},
				map[int64]tPBlocks{100: {"P1", true}, 101: {"P1", false}}),
			args:       args{"P1", nil},
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{}, map[int64]tPBlocks{}),
		},
		{
			name: "delete the shortest of two peers with block requests sent and blocks received",
			pool: makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 120}, {ID: "P2", Height: 200}},
				map[int64]tPBlocks{100: {"P1", true}, 101: {"P1", false}}),
			args:       args{"P1", nil},
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{{ID: "P2", Height: 200}}, map[int64]tPBlocks{}),
		},
		{
			name: "delete the tallest of two peers with block requests sent and blocks received",
			pool: makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 120}, {ID: "P2", Height: 110}},
				map[int64]tPBlocks{100: {"P1", true}, 101: {"P1", false}}),
			args:       args{"P1", nil},
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{{ID: "P2", Height: 110}}, map[int64]tPBlocks{}),
		},
	}

	for _, tt := range tests {
		tt := tt
		t.Run(tt.name, func(t *testing.T) {
			tt.pool.RemovePeer(tt.args.peerID, tt.args.err)
			assertBlockPoolEquivalent(t, tt.poolWanted, tt.pool)
		})
	}
}
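
// TestBlockPoolRemoveShortPeers checks that removeShortPeers drops every peer
// whose reported height is below the pool height and keeps the others.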
func TestBlockPoolRemoveShortPeers(t *testing.T) {
	testBcR := newTestBcR()

	tests := []struct {
		name       string
		pool       *BlockPool
		poolWanted *BlockPool
	}{
		{
			name: "no short peers",
			pool: makeBlockPool(testBcR, 100,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 110}, {ID: "P3", Height: 120}}, map[int64]tPBlocks{}),
			poolWanted: makeBlockPool(testBcR, 100,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 110}, {ID: "P3", Height: 120}}, map[int64]tPBlocks{}),
		},
		{
			name: "one short peer",
			pool: makeBlockPool(testBcR, 100,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 90}, {ID: "P3", Height: 120}}, map[int64]tPBlocks{}),
			poolWanted: makeBlockPool(testBcR, 100,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P3", Height: 120}}, map[int64]tPBlocks{}),
		},
		{
			name: "all short peers",
			pool: makeBlockPool(testBcR, 100,
				[]BpPeer{{ID: "P1", Height: 90}, {ID: "P2", Height: 91}, {ID: "P3", Height: 92}}, map[int64]tPBlocks{}),
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{}, map[int64]tPBlocks{}),
		},
	}

	for _, tt := range tests {
		tt := tt
		t.Run(tt.name, func(t *testing.T) {
			pool := tt.pool
			pool.removeShortPeers()
			assertBlockPoolEquivalent(t, tt.poolWanted, tt.pool)
		})
	}
}
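
// TestBlockPoolSendRequestBatch checks that MakeNextRequests sends at most
// maxRequestsPerPeer block requests to each peer and records the pending requests per peer.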
func TestBlockPoolSendRequestBatch(t *testing.T) {
	type testPeerResult struct {
		id                      p2p.ID
		numPendingBlockRequests int
	}

	testBcR := newTestBcR()

	tests := []struct {
		name                       string
		pool                       *BlockPool
		maxRequestsPerPeer         int
		expRequests                map[int64]bool
		expPeerResults             []testPeerResult
		expnumPendingBlockRequests int
	}{
		{
			name:                       "one peer - send up to maxRequestsPerPeer block requests",
			pool:                       makeBlockPool(testBcR, 10, []BpPeer{{ID: "P1", Height: 100}}, map[int64]tPBlocks{}),
			maxRequestsPerPeer:         2,
			expRequests:                map[int64]bool{10: true, 11: true},
			expPeerResults:             []testPeerResult{{id: "P1", numPendingBlockRequests: 2}},
			expnumPendingBlockRequests: 2,
		},
		{
			name: "n peers - send n*maxRequestsPerPeer block requests",
			pool: makeBlockPool(
				testBcR,
				10,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 100}},
				map[int64]tPBlocks{}),
			maxRequestsPerPeer: 2,
			expRequests:        map[int64]bool{10: true, 11: true},
			expPeerResults: []testPeerResult{
				{id: "P1", numPendingBlockRequests: 2},
				{id: "P2", numPendingBlockRequests: 2}},
			expnumPendingBlockRequests: 4,
		},
	}

	for _, tt := range tests {
		tt := tt
		t.Run(tt.name, func(t *testing.T) {
			resetPoolTestResults()

			var pool = tt.pool
			maxRequestsPerPeer = tt.maxRequestsPerPeer
			pool.MakeNextRequests(10)
			assert.Equal(t, testResults.numRequestsSent, maxRequestsPerPeer*len(pool.peers))

			for _, tPeer := range tt.expPeerResults {
				var peer = pool.peers[tPeer.id]
				assert.NotNil(t, peer)
				assert.Equal(t, tPeer.numPendingBlockRequests, peer.NumPendingBlockRequests)
			}
			assert.Equal(t, testResults.numRequestsSent, maxRequestsPerPeer*len(pool.peers))
		})
	}
}
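
// TestBlockPoolAddBlock checks that AddBlock stores a block that was requested from the
// sending peer and rejects blocks from unknown peers, unrequested heights, duplicates,
// and blocks sent by a peer other than the one they were requested from.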
func TestBlockPoolAddBlock(t *testing.T) {
	testBcR := newTestBcR()
	txs := []types.Tx{types.Tx("foo"), types.Tx("bar")}

	type args struct {
		peerID    p2p.ID
		block     *types.Block
		blockSize int
	}
	tests := []struct {
		name       string
		pool       *BlockPool
		args       args
		poolWanted *BlockPool
		errWanted  error
	}{
		{name: "block from unknown peer",
			pool: makeBlockPool(testBcR, 10, []BpPeer{{ID: "P1", Height: 100}}, map[int64]tPBlocks{}),
			args: args{
				peerID:    "P2",
				block:     types.MakeBlock(int64(10), txs, nil, nil),
				blockSize: 100,
			},
			poolWanted: makeBlockPool(testBcR, 10, []BpPeer{{ID: "P1", Height: 100}}, map[int64]tPBlocks{}),
			errWanted:  errBadDataFromPeer,
		},
		{name: "unexpected block 11 from known peer - waiting for 10",
			pool: makeBlockPool(testBcR, 10,
				[]BpPeer{{ID: "P1", Height: 100}},
				map[int64]tPBlocks{10: {"P1", false}}),
			args: args{
				peerID:    "P1",
				block:     types.MakeBlock(int64(11), txs, nil, nil),
				blockSize: 100,
			},
			poolWanted: makeBlockPool(testBcR, 10,
				[]BpPeer{{ID: "P1", Height: 100}},
				map[int64]tPBlocks{10: {"P1", false}}),
			errWanted: errMissingBlock,
		},
		{name: "unexpected block 10 from known peer - already have 10",
			pool: makeBlockPool(testBcR, 10,
				[]BpPeer{{ID: "P1", Height: 100}},
				map[int64]tPBlocks{10: {"P1", true}, 11: {"P1", false}}),
			args: args{
				peerID:    "P1",
				block:     types.MakeBlock(int64(10), txs, nil, nil),
				blockSize: 100,
			},
			poolWanted: makeBlockPool(testBcR, 10,
				[]BpPeer{{ID: "P1", Height: 100}},
				map[int64]tPBlocks{10: {"P1", true}, 11: {"P1", false}}),
			errWanted: errDuplicateBlock,
		},
		{name: "unexpected block 10 from known peer P2 - expected 10 to come from P1",
			pool: makeBlockPool(testBcR, 10,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 100}},
				map[int64]tPBlocks{10: {"P1", false}}),
			args: args{
				peerID:    "P2",
				block:     types.MakeBlock(int64(10), txs, nil, nil),
				blockSize: 100,
			},
			poolWanted: makeBlockPool(testBcR, 10,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 100}},
				map[int64]tPBlocks{10: {"P1", false}}),
			errWanted: errBadDataFromPeer,
		},
		{name: "expected block from known peer",
			pool: makeBlockPool(testBcR, 10,
				[]BpPeer{{ID: "P1", Height: 100}},
				map[int64]tPBlocks{10: {"P1", false}}),
			args: args{
				peerID:    "P1",
				block:     types.MakeBlock(int64(10), txs, nil, nil),
				blockSize: 100,
			},
			poolWanted: makeBlockPool(testBcR, 10,
				[]BpPeer{{ID: "P1", Height: 100}},
				map[int64]tPBlocks{10: {"P1", true}}),
			errWanted: nil,
		},
	}

	for _, tt := range tests {
		tt := tt
		t.Run(tt.name, func(t *testing.T) {
			err := tt.pool.AddBlock(tt.args.peerID, tt.args.block, tt.args.blockSize)
			assert.Equal(t, tt.errWanted, err)
			assertBlockPoolEquivalent(t, tt.poolWanted, tt.pool)
		})
	}
}
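
// TestBlockPoolFirstTwoBlocksAndPeers checks that FirstTwoBlocksAndPeers returns the
// blocks at the current height and the next one, and errMissingBlock when either is absent.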
func TestBlockPoolFirstTwoBlocksAndPeers(t *testing.T) {
	testBcR := newTestBcR()
	tests := []struct {
		name         string
		pool         *BlockPool
		firstWanted  int64
		secondWanted int64
		errWanted    error
	}{
		{
			name: "both blocks missing",
			pool: makeBlockPool(testBcR, 10,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 100}},
				map[int64]tPBlocks{15: {"P1", true}, 16: {"P2", true}}),
			errWanted: errMissingBlock,
		},
		{
			name: "second block missing",
			pool: makeBlockPool(testBcR, 15,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 100}},
				map[int64]tPBlocks{15: {"P1", true}, 18: {"P2", true}}),
			firstWanted: 15,
			errWanted:   errMissingBlock,
		},
		{
			name: "first block missing",
			pool: makeBlockPool(testBcR, 15,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 100}},
				map[int64]tPBlocks{16: {"P2", true}, 18: {"P2", true}}),
			secondWanted: 16,
			errWanted:    errMissingBlock,
		},
		{
			name: "both blocks present",
			pool: makeBlockPool(testBcR, 10,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 100}},
				map[int64]tPBlocks{10: {"P1", true}, 11: {"P2", true}}),
			firstWanted:  10,
			secondWanted: 11,
		},
	}

	for _, tt := range tests {
		tt := tt
		t.Run(tt.name, func(t *testing.T) {
			pool := tt.pool
			gotFirst, gotSecond, err := pool.FirstTwoBlocksAndPeers()
			assert.Equal(t, tt.errWanted, err)

			if tt.firstWanted != 0 {
				peer := pool.blocks[tt.firstWanted]
				block := pool.peers[peer].blocks[tt.firstWanted]
				assert.Equal(t, block, gotFirst.block,
					"BlockPool.FirstTwoBlocksAndPeers() gotFirst = %v, want %v",
					gotFirst.block.Height, tt.firstWanted)
			}

			if tt.secondWanted != 0 {
				peer := pool.blocks[tt.secondWanted]
				block := pool.peers[peer].blocks[tt.secondWanted]
				assert.Equal(t, block, gotSecond.block,
					"BlockPool.FirstTwoBlocksAndPeers() gotSecond = %v, want %v",
					gotSecond.block.Height, tt.secondWanted)
			}
		})
	}
}
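
// TestBlockPoolInvalidateFirstTwoBlocks checks that InvalidateFirstTwoBlocks removes the
// peers that supplied the blocks at the current height and the next one, along with their blocks.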
func TestBlockPoolInvalidateFirstTwoBlocks(t *testing.T) {
	testBcR := newTestBcR()
	tests := []struct {
		name       string
		pool       *BlockPool
		poolWanted *BlockPool
	}{
		{
			name: "both blocks missing",
			pool: makeBlockPool(testBcR, 10,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 100}},
				map[int64]tPBlocks{15: {"P1", true}, 16: {"P2", true}}),
			poolWanted: makeBlockPool(testBcR, 10,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 100}},
				map[int64]tPBlocks{15: {"P1", true}, 16: {"P2", true}}),
		},
		{
			name: "second block missing",
			pool: makeBlockPool(testBcR, 15,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 100}},
				map[int64]tPBlocks{15: {"P1", true}, 18: {"P2", true}}),
			poolWanted: makeBlockPool(testBcR, 15,
				[]BpPeer{{ID: "P2", Height: 100}},
				map[int64]tPBlocks{18: {"P2", true}}),
		},
		{
			name: "first block missing",
			pool: makeBlockPool(testBcR, 15,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 100}},
				map[int64]tPBlocks{18: {"P1", true}, 16: {"P2", true}}),
			poolWanted: makeBlockPool(testBcR, 15,
				[]BpPeer{{ID: "P1", Height: 100}},
				map[int64]tPBlocks{18: {"P1", true}}),
		},
		{
			name: "both blocks present",
			pool: makeBlockPool(testBcR, 10,
				[]BpPeer{{ID: "P1", Height: 100}, {ID: "P2", Height: 100}},
				map[int64]tPBlocks{10: {"P1", true}, 11: {"P2", true}}),
			poolWanted: makeBlockPool(testBcR, 10,
				[]BpPeer{},
				map[int64]tPBlocks{}),
		},
	}

	for _, tt := range tests {
		tt := tt
		t.Run(tt.name, func(t *testing.T) {
			tt.pool.InvalidateFirstTwoBlocks(errNoPeerResponse)
			assertBlockPoolEquivalent(t, tt.poolWanted, tt.pool)
		})
	}
}
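
// TestProcessedCurrentHeightBlock checks that ProcessedCurrentHeightBlock deletes the
// block at the current height and advances the pool height by one.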
func TestProcessedCurrentHeightBlock(t *testing.T) {
	testBcR := newTestBcR()

	tests := []struct {
		name       string
		pool       *BlockPool
		poolWanted *BlockPool
	}{
		{
			name: "one peer",
			pool: makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 120}},
				map[int64]tPBlocks{100: {"P1", true}, 101: {"P1", true}}),
			poolWanted: makeBlockPool(testBcR, 101, []BpPeer{{ID: "P1", Height: 120}},
				map[int64]tPBlocks{101: {"P1", true}}),
		},
		{
			name: "multiple peers",
			pool: makeBlockPool(testBcR, 100,
				[]BpPeer{{ID: "P1", Height: 120}, {ID: "P2", Height: 120}, {ID: "P3", Height: 130}},
				map[int64]tPBlocks{
					100: {"P1", true}, 104: {"P1", true}, 105: {"P1", false},
					101: {"P2", true}, 103: {"P2", false},
					102: {"P3", true}, 106: {"P3", true}}),
			poolWanted: makeBlockPool(testBcR, 101,
				[]BpPeer{{ID: "P1", Height: 120}, {ID: "P2", Height: 120}, {ID: "P3", Height: 130}},
				map[int64]tPBlocks{
					104: {"P1", true}, 105: {"P1", false},
					101: {"P2", true}, 103: {"P2", false},
					102: {"P3", true}, 106: {"P3", true}}),
		},
	}

	for _, tt := range tests {
		tt := tt
		t.Run(tt.name, func(t *testing.T) {
			tt.pool.ProcessedCurrentHeightBlock()
			assertBlockPoolEquivalent(t, tt.poolWanted, tt.pool)
		})
	}
}
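
// TestRemovePeerAtCurrentHeight checks that RemovePeerAtCurrentHeights removes the peer
// that still owes the block at the current height (or, if that block was received, at the
// next height), together with all of that peer's blocks.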
func TestRemovePeerAtCurrentHeight(t *testing.T) {
	testBcR := newTestBcR()

	tests := []struct {
		name       string
		pool       *BlockPool
		poolWanted *BlockPool
	}{
		{
			name: "one peer, remove peer for block at H",
			pool: makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 120}},
				map[int64]tPBlocks{100: {"P1", false}, 101: {"P1", true}}),
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{}, map[int64]tPBlocks{}),
		},
		{
			name: "one peer, remove peer for block at H+1",
			pool: makeBlockPool(testBcR, 100, []BpPeer{{ID: "P1", Height: 120}},
				map[int64]tPBlocks{100: {"P1", true}, 101: {"P1", false}}),
			poolWanted: makeBlockPool(testBcR, 100, []BpPeer{}, map[int64]tPBlocks{}),
		},
		{
			name: "multiple peers, remove peer for block at H",
			pool: makeBlockPool(testBcR, 100,
				[]BpPeer{{ID: "P1", Height: 120}, {ID: "P2", Height: 120}, {ID: "P3", Height: 130}},
				map[int64]tPBlocks{
					100: {"P1", false}, 104: {"P1", true}, 105: {"P1", false},
					101: {"P2", true}, 103: {"P2", false},
					102: {"P3", true}, 106: {"P3", true}}),
			poolWanted: makeBlockPool(testBcR, 100,
				[]BpPeer{{ID: "P2", Height: 120}, {ID: "P3", Height: 130}},
				map[int64]tPBlocks{
					101: {"P2", true}, 103: {"P2", false},
					102: {"P3", true}, 106: {"P3", true}}),
		},
		{
			name: "multiple peers, remove peer for block at H+1",
			pool: makeBlockPool(testBcR, 100,
				[]BpPeer{{ID: "P1", Height: 120}, {ID: "P2", Height: 120}, {ID: "P3", Height: 130}},
				map[int64]tPBlocks{
					100: {"P1", true}, 104: {"P1", true}, 105: {"P1", false},
					101: {"P2", false}, 103: {"P2", false},
					102: {"P3", true}, 106: {"P3", true}}),
			poolWanted: makeBlockPool(testBcR, 100,
				[]BpPeer{{ID: "P1", Height: 120}, {ID: "P3", Height: 130}},
				map[int64]tPBlocks{
					100: {"P1", true}, 104: {"P1", true}, 105: {"P1", false},
					102: {"P3", true}, 106: {"P3", true}}),
		},
	}

	for _, tt := range tests {
		tt := tt
		t.Run(tt.name, func(t *testing.T) {
			tt.pool.RemovePeerAtCurrentHeights(errNoPeerResponse)
			assertBlockPoolEquivalent(t, tt.poolWanted, tt.pool)
		})
	}
}