package store

import (
	"fmt"
	"strconv"
	"sync"

	"github.com/pkg/errors"

	dbm "github.com/tendermint/tm-db"

	"github.com/tendermint/tendermint/types"
)

/*
BlockStore is a simple low level store for blocks.

There are three types of information stored:
 - BlockMeta:   Meta information about each block
 - Block part:  Parts of each block, aggregated w/ PartSet
 - Commit:      The commit part of each block, for gossiping precommit votes

Currently the precommit signatures are duplicated in the Block parts as
well as the Commit. In the future this may change, perhaps by moving
the Commit data outside the Block. (TODO)

// NOTE: BlockStore methods will panic if they encounter errors
// deserializing loaded data, indicating probable corruption on disk.
*/
type BlockStore struct {
	db dbm.DB

	mtx    sync.RWMutex
	height int64
}

// NewBlockStore returns a new BlockStore with the given DB,
// initialized to the last height that was committed to the DB.
func NewBlockStore(db dbm.DB) *BlockStore {
	bsjson := LoadBlockStoreStateJSON(db)
	return &BlockStore{
		height: bsjson.Height,
		db:     db,
	}
}

// Height returns the last known contiguous block height.
func (bs *BlockStore) Height() int64 {
	bs.mtx.RLock()
	defer bs.mtx.RUnlock()
	return bs.height
}

// LoadBlock returns the block with the given height.
// If no block is found for that height, it returns nil.
func (bs *BlockStore) LoadBlock(height int64) *types.Block {
	var blockMeta = bs.LoadBlockMeta(height)
	if blockMeta == nil {
		return nil
	}

	var block = new(types.Block)
	buf := []byte{}
	for i := 0; i < blockMeta.BlockID.PartsHeader.Total; i++ {
		part := bs.LoadBlockPart(height, i)
		buf = append(buf, part.Bytes...)
	}
	err := cdc.UnmarshalBinaryLengthPrefixed(buf, block)
	if err != nil {
		// NOTE: The existence of meta should imply the existence of the
		// block. So, make sure meta is only saved after blocks are saved.
		panic(errors.Wrap(err, "Error reading block"))
	}
	return block
}

// LoadBlockByHash returns the block with the given hash.
// If no block is found for that hash, it returns nil.
// Panics if it fails to parse the height associated with the given hash.
func (bs *BlockStore) LoadBlockByHash(hash []byte) *types.Block {
	bz, err := bs.db.Get(calcBlockHashKey(hash))
	if err != nil {
		panic(err)
	}
	if len(bz) == 0 {
		return nil
	}

	s := string(bz)
	height, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		panic(errors.Wrapf(err, "failed to extract height from %s", s))
	}
	return bs.LoadBlock(height)
}

// LoadBlockPart returns the Part at the given index
// from the block at the given height.
// If no part is found for the given height and index, it returns nil.
func (bs *BlockStore) LoadBlockPart(height int64, index int) *types.Part {
	var part = new(types.Part)
	bz, err := bs.db.Get(calcBlockPartKey(height, index))
	if err != nil {
		panic(err)
	}
	if len(bz) == 0 {
		return nil
	}
	err = cdc.UnmarshalBinaryBare(bz, part)
	if err != nil {
		panic(errors.Wrap(err, "Error reading block part"))
	}
	return part
}

// LoadBlockMeta returns the BlockMeta for the given height.
// If no block is found for the given height, it returns nil.
func (bs *BlockStore) LoadBlockMeta(height int64) *types.BlockMeta {
	var blockMeta = new(types.BlockMeta)
	bz, err := bs.db.Get(calcBlockMetaKey(height))
	if err != nil {
		panic(err)
	}
	if len(bz) == 0 {
		return nil
	}
	err = cdc.UnmarshalBinaryBare(bz, blockMeta)
	if err != nil {
		panic(errors.Wrap(err, "Error reading block meta"))
	}
	return blockMeta
}

// LoadBlockCommit returns the Commit for the given height.
// This commit consists of the +2/3 and other Precommit-votes for block at `height`,
// and it comes from the block.LastCommit for `height+1`.
// If no commit is found for the given height, it returns nil.
func (bs *BlockStore) LoadBlockCommit(height int64) *types.Commit {
	var commit = new(types.Commit)
	bz, err := bs.db.Get(calcBlockCommitKey(height))
	if err != nil {
		panic(err)
	}
	if len(bz) == 0 {
		return nil
	}
	err = cdc.UnmarshalBinaryBare(bz, commit)
	if err != nil {
		panic(errors.Wrap(err, "Error reading block commit"))
	}
	return commit
}
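
// Illustrative relationship (an editor's sketch, not part of the original
// file). SaveBlock persists block.LastCommit of the block at height h+1 under
// calcBlockCommitKey(h), so, assuming both heights have been saved:
//
//	commit := bs.LoadBlockCommit(h) // +2/3 precommits for the block at height h
//	next := bs.LoadBlock(h + 1)
//	// commit was written from next.LastCommit when block h+1 was saved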

// LoadSeenCommit returns the locally seen Commit for the given height.
// This is useful when we've seen a commit, but there has not yet been
// a new block at `height + 1` that includes this commit in its block.LastCommit.
func (bs *BlockStore) LoadSeenCommit(height int64) *types.Commit {
	var commit = new(types.Commit)
	bz, err := bs.db.Get(calcSeenCommitKey(height))
	if err != nil {
		panic(err)
	}
	if len(bz) == 0 {
		return nil
	}
	err = cdc.UnmarshalBinaryBare(bz, commit)
	if err != nil {
		panic(errors.Wrap(err, "Error reading block seen commit"))
	}
	return commit
}

// SaveBlock persists the given block, blockParts, and seenCommit to the underlying db.
// blockParts: Must be parts of the block
// seenCommit: The +2/3 precommits that were seen which committed at height.
//             If all the nodes restart after committing a block,
//             we need this to reload the precommits to catch-up nodes to the
//             most recent height. Otherwise they'd stall at H-1.
func (bs *BlockStore) SaveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) {
	if block == nil {
		panic("BlockStore can only save a non-nil block")
	}

	height := block.Height
	hash := block.Hash()

	if g, w := height, bs.Height()+1; g != w {
		panic(fmt.Sprintf("BlockStore can only save contiguous blocks. Wanted %v, got %v", w, g))
	}
	if !blockParts.IsComplete() {
		panic("BlockStore can only save complete block part sets")
	}

	// Save block meta
	blockMeta := types.NewBlockMeta(block, blockParts)
	metaBytes := cdc.MustMarshalBinaryBare(blockMeta)
	bs.db.Set(calcBlockMetaKey(height), metaBytes)
	bs.db.Set(calcBlockHashKey(hash), []byte(fmt.Sprintf("%d", height)))

	// Save block parts
	for i := 0; i < blockParts.Total(); i++ {
		part := blockParts.GetPart(i)
		bs.saveBlockPart(height, i, part)
	}

	// Save block commit (duplicate and separate from the Block)
	blockCommitBytes := cdc.MustMarshalBinaryBare(block.LastCommit)
	bs.db.Set(calcBlockCommitKey(height-1), blockCommitBytes)

	// Save seen commit (seen +2/3 precommits for block)
	// NOTE: we can delete this at a later height
	seenCommitBytes := cdc.MustMarshalBinaryBare(seenCommit)
	bs.db.Set(calcSeenCommitKey(height), seenCommitBytes)

	// Save new BlockStoreStateJSON descriptor
	BlockStoreStateJSON{Height: height}.Save(bs.db)

	// Done!
	bs.mtx.Lock()
	bs.height = height
	bs.mtx.Unlock()

	// Flush
	bs.db.SetSync(nil, nil)
}
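
// Illustrative save/load round trip (an editor's sketch, not part of the
// original file). It assumes block, blockParts and seenCommit form a valid,
// complete block at exactly bs.Height()+1, as SaveBlock requires:
//
//	bs.SaveBlock(block, blockParts, seenCommit)
//	loaded := bs.LoadBlock(block.Height)         // the block just saved
//	seen := bs.LoadSeenCommit(block.Height)      // the seenCommit just saved
//	prev := bs.LoadBlockCommit(block.Height - 1) // block.LastCommit, i.e. the commit for the previous height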

func (bs *BlockStore) saveBlockPart(height int64, index int, part *types.Part) {
	if height != bs.Height()+1 {
		panic(fmt.Sprintf("BlockStore can only save contiguous blocks. Wanted %v, got %v", bs.Height()+1, height))
	}
	partBytes := cdc.MustMarshalBinaryBare(part)
	bs.db.Set(calcBlockPartKey(height, index), partBytes)
}

//-----------------------------------------------------------------------------

func calcBlockMetaKey(height int64) []byte {
	return []byte(fmt.Sprintf("H:%v", height))
}

func calcBlockPartKey(height int64, partIndex int) []byte {
	return []byte(fmt.Sprintf("P:%v:%v", height, partIndex))
}

func calcBlockCommitKey(height int64) []byte {
	return []byte(fmt.Sprintf("C:%v", height))
}

func calcSeenCommitKey(height int64) []byte {
	return []byte(fmt.Sprintf("SC:%v", height))
}

func calcBlockHashKey(hash []byte) []byte {
	return []byte(fmt.Sprintf("BH:%x", hash))
}
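
// Illustrative key layout (an editor's sketch, not part of the original file).
// For height 12, part index 3, and some block hash, the helpers above produce:
//
//	calcBlockMetaKey(12)    -> "H:12"
//	calcBlockPartKey(12, 3) -> "P:12:3"
//	calcBlockCommitKey(12)  -> "C:12"
//	calcSeenCommitKey(12)   -> "SC:12"
//	calcBlockHashKey(hash)  -> "BH:<lower-case hex of hash>"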

//-----------------------------------------------------------------------------

var blockStoreKey = []byte("blockStore")

// BlockStoreStateJSON is the block store state JSON structure.
type BlockStoreStateJSON struct {
	Height int64 `json:"height"`
}

// Save persists the blockStore state to the database as JSON.
func (bsj BlockStoreStateJSON) Save(db dbm.DB) {
	bytes, err := cdc.MarshalJSON(bsj)
	if err != nil {
		panic(fmt.Sprintf("Could not marshal state bytes: %v", err))
	}
	db.SetSync(blockStoreKey, bytes)
}
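
// Illustrative persisted value (an editor's sketch, not part of the original
// file). For Height 12, cdc.MarshalJSON stores something like the following
// under the "blockStore" key (go-amino typically renders 64-bit integers as
// JSON strings):
//
//	{"height":"12"}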

// LoadBlockStoreStateJSON returns the BlockStoreStateJSON as loaded from disk.
// If no BlockStoreStateJSON was previously persisted, it returns the zero value.
func LoadBlockStoreStateJSON(db dbm.DB) BlockStoreStateJSON {
	bytes, err := db.Get(blockStoreKey)
	if err != nil {
		panic(err)
	}
	if len(bytes) == 0 {
		return BlockStoreStateJSON{
			Height: 0,
		}
	}
	bsj := BlockStoreStateJSON{}
	err = cdc.UnmarshalJSON(bytes, &bsj)
	if err != nil {
		panic(fmt.Sprintf("Could not unmarshal bytes: %X", bytes))
	}
	return bsj
}
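
// Minimal usage sketch (an editor's illustration, not part of the original
// file). It assumes the tm-db in-memory backend (dbm.NewMemDB) and a valid
// block, part set and seen commit produced elsewhere, e.g. by consensus:
//
//	db := dbm.NewMemDB()
//	bs := NewBlockStore(db) // Height() == 0 on an empty DB
//	bs.SaveBlock(block, blockParts, seenCommit)
//	meta := bs.LoadBlockMeta(bs.Height())         // BlockMeta of the last saved block
//	same := bs.LoadBlockByHash(meta.BlockID.Hash) // same block, looked up via the "BH:" index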