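// Package blockchain implements the blockchain (fast-sync) reactor: it
// requests blocks from peers via a BlockPool, verifies each block against the
// next block's commit, applies it to the app, and hands control to the
// consensus reactor once the node has caught up.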
package blockchain

import (
	"bytes"
	"errors"
	"reflect"
	"time"

	wire "github.com/tendermint/go-wire"

	"github.com/tendermint/tendermint/p2p"
	"github.com/tendermint/tendermint/proxy"
	sm "github.com/tendermint/tendermint/state"
	"github.com/tendermint/tendermint/types"
	cmn "github.com/tendermint/tmlibs/common"
	"github.com/tendermint/tmlibs/log"
)

const (
	// BlockchainChannel is a channel for blocks and status updates (`BlockStore` height)
	BlockchainChannel = byte(0x40)

	defaultChannelCapacity = 1000
	trySyncIntervalMS      = 50

	// stop syncing when last block's time is
	// within this much of the system time.
	// stopSyncingDurationMinutes = 10

	// ask for best height every 10s
	statusUpdateIntervalSeconds = 10
	// check if we should switch to consensus reactor
	switchToConsensusIntervalSeconds = 1
)

type consensusReactor interface {
	// for when we switch from blockchain reactor and fast sync to
	// the consensus machine
	SwitchToConsensus(*sm.State, int)
}

// BlockchainReactor handles long-term catchup syncing.
type BlockchainReactor struct {
	p2p.BaseReactor

	state        *sm.State
	proxyAppConn proxy.AppConnConsensus // same as consensus.proxyAppConn
	store        *BlockStore
	pool         *BlockPool
	fastSync     bool
	requestsCh   chan BlockRequest // block requests from the pool, sent to peers in poolRoutine
	timeoutsCh   chan string       // keys of peers the pool reports as timed out
	eventBus     *types.EventBus
}

// NewBlockchainReactor returns a new reactor instance.
func NewBlockchainReactor(state *sm.State, proxyAppConn proxy.AppConnConsensus, store *BlockStore, fastSync bool) *BlockchainReactor {
	if state.LastBlockHeight == store.Height()-1 {
		store.height-- // XXX HACK, make this better
	}
	if state.LastBlockHeight != store.Height() {
		cmn.PanicSanity(cmn.Fmt("state (%v) and store (%v) height mismatch", state.LastBlockHeight, store.Height()))
	}
	requestsCh := make(chan BlockRequest, defaultChannelCapacity)
	timeoutsCh := make(chan string, defaultChannelCapacity)
	pool := NewBlockPool(
		store.Height()+1,
		requestsCh,
		timeoutsCh,
	)
	bcR := &BlockchainReactor{
		state:        state,
		proxyAppConn: proxyAppConn,
		store:        store,
		pool:         pool,
		fastSync:     fastSync,
		requestsCh:   requestsCh,
		timeoutsCh:   timeoutsCh,
	}
	bcR.BaseReactor = *p2p.NewBaseReactor("BlockchainReactor", bcR)
	return bcR
}

// SetLogger implements cmn.Service by setting the logger on reactor and pool.
func (bcR *BlockchainReactor) SetLogger(l log.Logger) {
	bcR.BaseService.Logger = l
	bcR.pool.Logger = l
}

// OnStart implements cmn.Service.
func (bcR *BlockchainReactor) OnStart() error {
	bcR.BaseReactor.OnStart()
	if bcR.fastSync {
		_, err := bcR.pool.Start()
		if err != nil {
			return err
		}
		go bcR.poolRoutine()
	}
	return nil
}

// OnStop implements cmn.Service.
func (bcR *BlockchainReactor) OnStop() {
	bcR.BaseReactor.OnStop()
	bcR.pool.Stop()
}

// GetChannels implements Reactor
func (bcR *BlockchainReactor) GetChannels() []*p2p.ChannelDescriptor {
	return []*p2p.ChannelDescriptor{
		&p2p.ChannelDescriptor{
			ID:                BlockchainChannel,
			Priority:          10,
			SendQueueCapacity: 1000,
		},
	}
}

// AddPeer implements Reactor by sending our state to peer.
func (bcR *BlockchainReactor) AddPeer(peer p2p.Peer) {
	if !peer.Send(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}}) {
		// doing nothing, will try later in `poolRoutine`
	}
	// peer is added to the pool once we receive the first
	// bcStatusResponseMessage from the peer and call pool.SetPeerHeight
}

// RemovePeer implements Reactor by removing peer from the pool.
func (bcR *BlockchainReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
	bcR.pool.RemovePeer(peer.Key())
}

// respondToPeer loads a block and sends it to the requesting peer,
// if we have it. Otherwise, we'll respond saying we don't have it.
// According to the Tendermint spec, if all nodes are honest,
// no node should be requesting a block that doesn't exist.
func (bcR *BlockchainReactor) respondToPeer(msg *bcBlockRequestMessage, src p2p.Peer) (queued bool) {
	block := bcR.store.LoadBlock(msg.Height)
	if block != nil {
		msg := &bcBlockResponseMessage{Block: block}
		return src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{msg})
	}

	bcR.Logger.Info("Peer asking for a block we don't have", "src", src, "height", msg.Height)

	return src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{
		&bcNoBlockResponseMessage{Height: msg.Height},
	})
}

// Receive implements Reactor by handling 4 types of messages (look below).
func (bcR *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
	_, msg, err := DecodeMessage(msgBytes, bcR.maxMsgSize())
	if err != nil {
		bcR.Logger.Error("Error decoding message", "err", err)
		return
	}

	bcR.Logger.Debug("Receive", "src", src, "chID", chID, "msg", msg)

	// TODO: improve logic to satisfy megacheck
	switch msg := msg.(type) {
	case *bcBlockRequestMessage:
		if queued := bcR.respondToPeer(msg, src); !queued {
			// Unfortunately not queued since the queue is full.
		}
	case *bcBlockResponseMessage:
		// Got a block.
		bcR.pool.AddBlock(src.Key(), msg.Block, len(msgBytes))
	case *bcStatusRequestMessage:
		// Send peer our state.
		queued := src.TrySend(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusResponseMessage{bcR.store.Height()}})
		if !queued {
			// sorry
		}
	case *bcStatusResponseMessage:
		// Got a peer status. Unverified.
		bcR.pool.SetPeerHeight(src.Key(), msg.Height)
	default:
		bcR.Logger.Error(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg)))
	}
}

// maxMsgSize returns the maximum allowable size of a
// message on the blockchain reactor.
func (bcR *BlockchainReactor) maxMsgSize() int {
	// The consensus block-size limit, plus a couple of bytes of headroom
	// (presumably for the wire type byte and framing).
	return bcR.state.Params.BlockSizeParams.MaxBytes + 2
}
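
// poolRoutine (below) drives fast sync. In a single loop it:
//   - sends block requests to peers when the pool asks for them (requestsCh),
//   - stops peers the pool reports as timed out (timeoutsCh),
//   - periodically broadcasts a status request (statusUpdateTicker),
//   - switches to the consensus reactor once the pool is caught up
//     (switchToConsensusTicker),
//   - and on each trySyncTicker tick verifies and applies up to 10 blocks.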
// Handle messages from the poolReactor telling the reactor what to do.
// NOTE: Don't sleep in the FOR_LOOP or otherwise slow it down!
// (Except for the SYNC_LOOP, which is the primary purpose and must be synchronous.)
func (bcR *BlockchainReactor) poolRoutine() {

	trySyncTicker := time.NewTicker(trySyncIntervalMS * time.Millisecond)
	statusUpdateTicker := time.NewTicker(statusUpdateIntervalSeconds * time.Second)
	switchToConsensusTicker := time.NewTicker(switchToConsensusIntervalSeconds * time.Second)

	blocksSynced := 0
	chainID := bcR.state.ChainID

	lastHundred := time.Now()
	lastRate := 0.0

FOR_LOOP:
	for {
		select {
		case request := <-bcR.requestsCh: // chan BlockRequest
			peer := bcR.Switch.Peers().Get(request.PeerID)
			if peer == nil {
				continue FOR_LOOP // Peer has since been disconnected.
			}
			msg := &bcBlockRequestMessage{request.Height}
			queued := peer.TrySend(BlockchainChannel, struct{ BlockchainMessage }{msg})
			if !queued {
				// We couldn't make the request, send-queue full.
				// The pool handles timeouts, just let it go.
				continue FOR_LOOP
			}
		case peerID := <-bcR.timeoutsCh: // chan string
			// Peer timed out.
			peer := bcR.Switch.Peers().Get(peerID)
			if peer != nil {
				bcR.Switch.StopPeerForError(peer, errors.New("BlockchainReactor Timeout"))
			}
		case <-statusUpdateTicker.C:
			// ask for status updates
			go bcR.BroadcastStatusRequest()
		case <-switchToConsensusTicker.C:
			height, numPending, lenRequesters := bcR.pool.GetStatus()
			outbound, inbound, _ := bcR.Switch.NumPeers()
			bcR.Logger.Debug("Consensus ticker", "numPending", numPending, "total", lenRequesters,
				"outbound", outbound, "inbound", inbound)
			if bcR.pool.IsCaughtUp() {
				bcR.Logger.Info("Time to switch to consensus reactor!", "height", height)
				bcR.pool.Stop()

				conR := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
				conR.SwitchToConsensus(bcR.state, blocksSynced)

				break FOR_LOOP
			}
		case <-trySyncTicker.C: // chan time
			// This loop can be slow as long as it's doing syncing work.
		SYNC_LOOP:
			for i := 0; i < 10; i++ {
				// See if there are any blocks to sync.
				first, second := bcR.pool.PeekTwoBlocks()
				//bcR.Logger.Info("TrySync peeked", "first", first, "second", second)
				if first == nil || second == nil {
					// We need both to sync the first block.
					break SYNC_LOOP
				}
				firstParts := first.MakePartSet(bcR.state.Params.BlockPartSizeBytes)
				firstPartsHeader := firstParts.Header()
				// Finally, verify the first block using the second's commit
				// NOTE: we can probably make this more efficient, but note that calling
				// first.Hash() doesn't verify the tx contents, so MakePartSet() is
				// currently necessary.
				err := bcR.state.Validators.VerifyCommit(
					chainID, types.BlockID{first.Hash(), firstPartsHeader}, first.Height, second.LastCommit)
				if err != nil {
					bcR.Logger.Error("Error in validation", "err", err)
					bcR.pool.RedoRequest(first.Height)
					break SYNC_LOOP
				} else {
					bcR.pool.PopRequest()
					bcR.store.SaveBlock(first, firstParts, second.LastCommit)
					// TODO: should we be firing events? need to fire NewBlock events manually ...
					// NOTE: we could improve performance if we
					// didn't make the app commit to disk every block
					// ... but we would need a way to get the hash without it persisting
					err := bcR.state.ApplyBlock(bcR.eventBus, bcR.proxyAppConn, first, firstPartsHeader, types.MockMempool{})
					if err != nil {
						// TODO This is bad, are we zombie?
						cmn.PanicQ(cmn.Fmt("Failed to process committed block (%d:%X): %v", first.Height, first.Hash(), err))
					}
					blocksSynced += 1
					if blocksSynced%100 == 0 {
						lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())
						bcR.Logger.Info("Fast Sync Rate", "height", bcR.pool.height,
							"max_peer_height", bcR.pool.MaxPeerHeight(), "blocks/s", lastRate)
						lastHundred = time.Now()
					}
				}
			}
			continue FOR_LOOP
		case <-bcR.Quit:
			break FOR_LOOP
		}
	}
}

// BroadcastStatusRequest broadcasts `BlockStore` height.
func (bcR *BlockchainReactor) BroadcastStatusRequest() error {
	bcR.Switch.Broadcast(BlockchainChannel, struct{ BlockchainMessage }{&bcStatusRequestMessage{bcR.store.Height()}})
	return nil
}

// SetEventBus sets the event bus that is passed to state.ApplyBlock when
// applying synced blocks.
func (bcR *BlockchainReactor) SetEventBus(b *types.EventBus) {
	bcR.eventBus = b
}

//-----------------------------------------------------------------------------
// Messages

const (
	msgTypeBlockRequest    = byte(0x10)
	msgTypeBlockResponse   = byte(0x11)
	msgTypeNoBlockResponse = byte(0x12)
	msgTypeStatusResponse  = byte(0x20)
	msgTypeStatusRequest   = byte(0x21)
)

// BlockchainMessage is a generic message for this reactor.
type BlockchainMessage interface{}

var _ = wire.RegisterInterface(
	struct{ BlockchainMessage }{},
	wire.ConcreteType{&bcBlockRequestMessage{}, msgTypeBlockRequest},
	wire.ConcreteType{&bcBlockResponseMessage{}, msgTypeBlockResponse},
	wire.ConcreteType{&bcNoBlockResponseMessage{}, msgTypeNoBlockResponse},
	wire.ConcreteType{&bcStatusResponseMessage{}, msgTypeStatusResponse},
	wire.ConcreteType{&bcStatusRequestMessage{}, msgTypeStatusRequest},
)
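
// Wire format note: each encoded message is the registered type byte above
// (e.g. 0x10 for a block request) followed by the go-wire binary encoding of
// the concrete message struct; DecodeMessage relies on this when it reads
// bz[0] as the message type.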

// DecodeMessage decodes BlockchainMessage.
// TODO: ensure that bz is completely read.
func DecodeMessage(bz []byte, maxSize int) (msgType byte, msg BlockchainMessage, err error) {
	msgType = bz[0]
	n := int(0)
	r := bytes.NewReader(bz)
	msg = wire.ReadBinary(struct{ BlockchainMessage }{}, r, maxSize, &n, &err).(struct{ BlockchainMessage }).BlockchainMessage
	// Flag trailing bytes only when decoding itself succeeded, so the
	// original decode error is not masked.
	if err == nil && n != len(bz) {
		err = errors.New("DecodeMessage() had bytes left over")
	}
	return
}

//-------------------------------------

type bcBlockRequestMessage struct {
	Height int
}

func (m *bcBlockRequestMessage) String() string {
	return cmn.Fmt("[bcBlockRequestMessage %v]", m.Height)
}

type bcNoBlockResponseMessage struct {
	Height int
}

func (brm *bcNoBlockResponseMessage) String() string {
	return cmn.Fmt("[bcNoBlockResponseMessage %d]", brm.Height)
}

//-------------------------------------

// NOTE: keep up-to-date with maxBlockchainResponseSize
type bcBlockResponseMessage struct {
	Block *types.Block
}

func (m *bcBlockResponseMessage) String() string {
	return cmn.Fmt("[bcBlockResponseMessage %v]", m.Block.Height)
}

//-------------------------------------

type bcStatusRequestMessage struct {
	Height int
}

func (m *bcStatusRequestMessage) String() string {
	return cmn.Fmt("[bcStatusRequestMessage %v]", m.Height)
}

//-------------------------------------

type bcStatusResponseMessage struct {
	Height int
}

func (m *bcStatusResponseMessage) String() string {
	return cmn.Fmt("[bcStatusResponseMessage %v]", m.Height)
}