package blockchain

import (
	"bytes"
	"errors"
	"reflect"
	"time"

	wire "github.com/tendermint/go-wire"

	"github.com/tendermint/tendermint/p2p"
	"github.com/tendermint/tendermint/proxy"
	sm "github.com/tendermint/tendermint/state"
	"github.com/tendermint/tendermint/types"
	cmn "github.com/tendermint/tmlibs/common"
	"github.com/tendermint/tmlibs/log"
)

const (
	// BlockchainChannel is a channel for blocks and status updates (`BlockStore` height)
	BlockchainChannel = byte(0x40)

	defaultChannelCapacity = 1000
	trySyncIntervalMS      = 50

	// stop syncing when last block's time is
	// within this much of the system time.
	// stopSyncingDurationMinutes = 10

	// ask for best height every 10s
	statusUpdateIntervalSeconds = 10

	// check if we should switch to consensus reactor
	switchToConsensusIntervalSeconds = 1
)

type consensusReactor interface {
	// for when we switch from blockchain reactor and fast sync to
	// the consensus machine
	SwitchToConsensus(*sm.State, int)
}

// BlockchainReactor handles long-term catchup syncing.
type BlockchainReactor struct {
	p2p.BaseReactor

	state        *sm.State
	proxyAppConn proxy.AppConnConsensus // same as consensus.proxyAppConn
	store        *BlockStore
	pool         *BlockPool
	fastSync     bool

	requestsCh chan BlockRequest
	timeoutsCh chan string

	eventBus *types.EventBus
}

// NewBlockchainReactor returns new reactor instance.
func NewBlockchainReactor(state *sm.State, proxyAppConn proxy.AppConnConsensus, store *BlockStore, fastSync bool) *BlockchainReactor {
	if state.LastBlockHeight == store.Height()-1 {
		store.height-- // XXX HACK, make this better
	}
	if state.LastBlockHeight != store.Height() {
		cmn.PanicSanity(cmn.Fmt("state (%v) and store (%v) height mismatch", state.LastBlockHeight, store.Height()))
	}

	requestsCh := make(chan BlockRequest, defaultChannelCapacity)
	timeoutsCh := make(chan string, defaultChannelCapacity)
	pool := NewBlockPool(
		store.Height()+1,
		requestsCh,
		timeoutsCh,
	)

	bcR := &BlockchainReactor{
		state:        state,
		proxyAppConn: proxyAppConn,
		store:        store,
		pool:         pool,
		fastSync:     fastSync,
		requestsCh:   requestsCh,
		timeoutsCh:   timeoutsCh,
	}
	bcR.BaseReactor = *p2p.NewBaseReactor("BlockchainReactor", bcR)
	return bcR
}
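
// Illustrative wiring only: the surrounding node setup (state, proxyApp, store,
// eventBus, logger, and the p2p switch sw) is assumed to exist elsewhere, and
// the "BLOCKCHAIN" reactor name is just an example.
//
//     bcR := NewBlockchainReactor(state, proxyApp, store, fastSync)
//     bcR.SetLogger(logger.With("module", "blockchain"))
//     bcR.SetEventBus(eventBus)
//     sw.AddReactor("BLOCKCHAIN", bcR)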

// SetLogger implements cmn.Service by setting the logger on reactor and pool.
func (bcR *BlockchainReactor) SetLogger(l log.Logger) {
	bcR.BaseService.Logger = l
	bcR.pool.Logger = l
}

// OnStart implements cmn.Service.
func (bcR *BlockchainReactor) OnStart() error {
	if err := bcR.BaseReactor.OnStart(); err != nil {
		return err
	}
	if bcR.fastSync {
		err := bcR.pool.Start()
		if err != nil {
			return err
		}
		go bcR.poolRoutine()
	}
	return nil
}

// OnStop implements cmn.Service.
func (bcR *BlockchainReactor) OnStop() {
	bcR.BaseReactor.OnStop()
	bcR.pool.Stop()
}

// GetChannels implements Reactor
func (bcR *BlockchainReactor) GetChannels() []*p2p.ChannelDescriptor {
	return []*p2p.ChannelDescriptor{
		{
			ID:                BlockchainChannel,
			Priority:          10,
			SendQueueCapacity: 1000,
		},
	}
}

// AddPeer implements Reactor by sending our state to peer.
func (bcR *BlockchainReactor) AddPeer(peer p2p.Peer) {
	if !peer.Send(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}}) {
		// doing nothing, will try later in `poolRoutine`
	}
	// peer is added to the pool once we receive the first
	// bcStatusResponseMessage from the peer and call pool.SetPeerHeight
}

// RemovePeer implements Reactor by removing peer from the pool.
func (bcR *BlockchainReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
	bcR.pool.RemovePeer(peer.Key())
}

// respondToPeer loads a block and sends it to the requesting peer,
// if we have it. Otherwise, we'll respond saying we don't have it.
// According to the Tendermint spec, if all nodes are honest,
// no node should be requesting a block that doesn't exist.
func (bcR *BlockchainReactor) respondToPeer(msg *bcBlockRequestMessage, src p2p.Peer) (queued bool) {
	block := bcR.store.LoadBlock(msg.Height)
	if block != nil {
		msg := &bcBlockResponseMessage{Block: block}
		return src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{msg})
	}

	bcR.Logger.Info("Peer asking for a block we don't have", "src", src, "height", msg.Height)

	return src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{
		&bcNoBlockResponseMessage{Height: msg.Height},
	})
}

// Receive implements Reactor by handling 4 types of messages (look below).
func (bcR *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
	_, msg, err := DecodeMessage(msgBytes, bcR.maxMsgSize())
	if err != nil {
		bcR.Logger.Error("Error decoding message", "err", err)
		return
	}

	bcR.Logger.Debug("Receive", "src", src, "chID", chID, "msg", msg)

	// TODO: improve logic to satisfy megacheck
	switch msg := msg.(type) {
	case *bcBlockRequestMessage:
		if queued := bcR.respondToPeer(msg, src); !queued {
			// Unfortunately not queued since the queue is full.
		}
	case *bcBlockResponseMessage:
		// Got a block.
		bcR.pool.AddBlock(src.Key(), msg.Block, len(msgBytes))
	case *bcStatusRequestMessage:
		// Send peer our state.
		queued := src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}})
		if !queued {
			// sorry
		}
	case *bcStatusResponseMessage:
		// Got a peer status. Unverified.
		bcR.pool.SetPeerHeight(src.Key(), msg.Height)
	default:
		bcR.Logger.Error(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg)))
	}
}

// maxMsgSize returns the maximum allowable size of a
// message on the blockchain reactor.
func (bcR *BlockchainReactor) maxMsgSize() int {
	return bcR.state.Params.BlockSizeParams.MaxBytes + 2
}

// Handle messages from the poolReactor telling the reactor what to do.
// NOTE: Don't sleep in the FOR_LOOP or otherwise slow it down!
// (Except for the SYNC_LOOP, which is the primary purpose and must be synchronous.)
func (bcR *BlockchainReactor) poolRoutine() {

	trySyncTicker := time.NewTicker(trySyncIntervalMS * time.Millisecond)
	statusUpdateTicker := time.NewTicker(statusUpdateIntervalSeconds * time.Second)
	switchToConsensusTicker := time.NewTicker(switchToConsensusIntervalSeconds * time.Second)

	blocksSynced := 0
	chainID := bcR.state.ChainID

	lastHundred := time.Now()
	lastRate := 0.0

FOR_LOOP:
	for {
		select {
		case request := <-bcR.requestsCh: // chan BlockRequest
			peer := bcR.Switch.Peers().Get(request.PeerID)
			if peer == nil {
				continue FOR_LOOP // Peer has since been disconnected.
			}
			msg := &bcBlockRequestMessage{request.Height}
			queued := peer.TrySend(BlockchainChannel, struct{ BlockchainMessage }{msg})
			if !queued {
				// We couldn't make the request, send-queue full.
				// The pool handles timeouts, just let it go.
				continue FOR_LOOP
			}
		case peerID := <-bcR.timeoutsCh: // chan string
			// Peer timed out.
			peer := bcR.Switch.Peers().Get(peerID)
			if peer != nil {
				bcR.Switch.StopPeerForError(peer, errors.New("BlockchainReactor Timeout"))
			}
		case <-statusUpdateTicker.C:
			// ask for status updates
			go bcR.BroadcastStatusRequest() // nolint: errcheck
		case <-switchToConsensusTicker.C:
			height, numPending, lenRequesters := bcR.pool.GetStatus()
			outbound, inbound, _ := bcR.Switch.NumPeers()
			bcR.Logger.Debug("Consensus ticker", "numPending", numPending, "total", lenRequesters,
				"outbound", outbound, "inbound", inbound)
			if bcR.pool.IsCaughtUp() {
				bcR.Logger.Info("Time to switch to consensus reactor!", "height", height)
				bcR.pool.Stop()

				conR := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
				conR.SwitchToConsensus(bcR.state, blocksSynced)

				break FOR_LOOP
			}
		case <-trySyncTicker.C: // chan time
			// This loop can be slow as long as it's doing syncing work.
		SYNC_LOOP:
			for i := 0; i < 10; i++ {
				// See if there are any blocks to sync.
				first, second := bcR.pool.PeekTwoBlocks()
				//bcR.Logger.Info("TrySync peeked", "first", first, "second", second)
				if first == nil || second == nil {
					// We need both to sync the first block.
					break SYNC_LOOP
				}
				firstParts := first.MakePartSet(bcR.state.Params.BlockPartSizeBytes)
				firstPartsHeader := firstParts.Header()
				// Finally, verify the first block using the second's commit
				// NOTE: we can probably make this more efficient, but note that calling
				// first.Hash() doesn't verify the tx contents, so MakePartSet() is
				// currently necessary.
				err := bcR.state.Validators.VerifyCommit(
					chainID, types.BlockID{first.Hash(), firstPartsHeader}, first.Height, second.LastCommit)
				if err != nil {
					bcR.Logger.Error("Error in validation", "err", err)
					bcR.pool.RedoRequest(first.Height)
					break SYNC_LOOP
				} else {
					bcR.pool.PopRequest()

					bcR.store.SaveBlock(first, firstParts, second.LastCommit)

					// TODO: should we be firing events? need to fire NewBlock events manually ...
					// NOTE: we could improve performance if we
					// didn't make the app commit to disk every block
					// ... but we would need a way to get the hash without it persisting
					err := bcR.state.ApplyBlock(bcR.eventBus, bcR.proxyAppConn, first, firstPartsHeader, types.MockMempool{})
					if err != nil {
						// TODO This is bad, are we zombie?
						cmn.PanicQ(cmn.Fmt("Failed to process committed block (%d:%X): %v", first.Height, first.Hash(), err))
					}
					blocksSynced += 1

					if blocksSynced%100 == 0 {
						lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())
						bcR.Logger.Info("Fast Sync Rate", "height", bcR.pool.height,
							"max_peer_height", bcR.pool.MaxPeerHeight(), "blocks/s", lastRate)
						lastHundred = time.Now()
					}
				}
			}
			continue FOR_LOOP
		case <-bcR.Quit:
			break FOR_LOOP
		}
	}
}
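
// The status round trip: BroadcastStatusRequest (below) broadcasts our
// BlockStore height to every peer; peers reply with a bcStatusResponseMessage
// carrying their own height, which Receive passes to pool.SetPeerHeight so the
// pool knows how far ahead it can request blocks.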

// BroadcastStatusRequest broadcasts `BlockStore` height.
func (bcR *BlockchainReactor) BroadcastStatusRequest() error {
	bcR.Switch.Broadcast(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusRequestMessage{bcR.store.Height()}})
	return nil
}

// SetEventBus sets event bus.
func (bcR *BlockchainReactor) SetEventBus(b *types.EventBus) {
	bcR.eventBus = b
}

//-----------------------------------------------------------------------------
// Messages

const (
	msgTypeBlockRequest    = byte(0x10)
	msgTypeBlockResponse   = byte(0x11)
	msgTypeNoBlockResponse = byte(0x12)
	msgTypeStatusResponse  = byte(0x20)
	msgTypeStatusRequest   = byte(0x21)
)

// BlockchainMessage is a generic message for this reactor.
type BlockchainMessage interface{}
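
// NOTE: the registration below maps each concrete message type to its msgType
// byte so go-wire can encode and decode values through the BlockchainMessage
// interface. This is also why messages are wrapped as
// struct{ BlockchainMessage }{msg} when sent: the wrapper carries the
// registered interface, letting go-wire prepend the right type byte.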

var _ = wire.RegisterInterface(
	struct{ BlockchainMessage }{},
	wire.ConcreteType{&bcBlockRequestMessage{}, msgTypeBlockRequest},
	wire.ConcreteType{&bcBlockResponseMessage{}, msgTypeBlockResponse},
	wire.ConcreteType{&bcNoBlockResponseMessage{}, msgTypeNoBlockResponse},
	wire.ConcreteType{&bcStatusResponseMessage{}, msgTypeStatusResponse},
	wire.ConcreteType{&bcStatusRequestMessage{}, msgTypeStatusRequest},
)

// DecodeMessage decodes BlockchainMessage.
// TODO: ensure that bz is completely read.
func DecodeMessage(bz []byte, maxSize int) (msgType byte, msg BlockchainMessage, err error) {
	msgType = bz[0]
	n := int(0)
	r := bytes.NewReader(bz)
	msg = wire.ReadBinary(struct{ BlockchainMessage }{}, r, maxSize, &n, &err).(struct{ BlockchainMessage }).BlockchainMessage
	if err != nil && n != len(bz) {
		err = errors.New("DecodeMessage() had bytes left over")
	}
	return
}
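
// Encoding sketch (illustrative, assuming go-wire's BinaryBytes helper):
//
//     bz := wire.BinaryBytes(struct{ BlockchainMessage }{&bcStatusRequestMessage{Height: 1}})
//     _, msg, err := DecodeMessage(bz, maxSize)
//
// where maxSize bounds the decoded message, as maxMsgSize does above.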

//-------------------------------------

type bcBlockRequestMessage struct {
	Height int
}

func (m *bcBlockRequestMessage) String() string {
	return cmn.Fmt("[bcBlockRequestMessage %v]", m.Height)
}

type bcNoBlockResponseMessage struct {
	Height int
}

func (brm *bcNoBlockResponseMessage) String() string {
	return cmn.Fmt("[bcNoBlockResponseMessage %d]", brm.Height)
}

//-------------------------------------

// NOTE: keep up-to-date with maxBlockchainResponseSize
type bcBlockResponseMessage struct {
	Block *types.Block
}

func (m *bcBlockResponseMessage) String() string {
	return cmn.Fmt("[bcBlockResponseMessage %v]", m.Block.Height)
}

//-------------------------------------

type bcStatusRequestMessage struct {
	Height int
}

func (m *bcStatusRequestMessage) String() string {
	return cmn.Fmt("[bcStatusRequestMessage %v]", m.Height)
}

//-------------------------------------

type bcStatusResponseMessage struct {
	Height int
}

func (m *bcStatusResponseMessage) String() string {
	return cmn.Fmt("[bcStatusResponseMessage %v]", m.Height)
}