---
order: 5
---

# Running in production

## Database

By default, Tendermint uses the `syndtr/goleveldb` package for its in-process
key-value database. Unfortunately, this implementation of LevelDB seems to suffer under heavy load (see
[#226](https://github.com/syndtr/goleveldb/issues/226)). It may be best to
install the real C implementation of LevelDB and compile Tendermint to use
it via `make build_c`. See the [install instructions](../introduction/install.md) for details.
Tendermint keeps multiple distinct databases in `$TMROOT/data`:

- `blockstore.db`: Keeps the entire blockchain - stores blocks,
  block commits, and block metadata, each indexed by height. Used to sync new
  peers.
- `evidence.db`: Stores all verified evidence of misbehaviour.
- `state.db`: Stores the current blockchain state (i.e. height, validators,
  consensus params). Only grows if consensus params or validators change. Also
  used to temporarily store intermediate results during block processing.
- `tx_index.db`: Indexes txs (and their results) by tx hash and by DeliverTx result events.

By default, Tendermint will only index txs by their hash, not by their DeliverTx
result events. See [indexing transactions](../app-dev/indexing-transactions.md) for
details.
There is currently no strategy for pruning the databases. Consider reducing
block production by [controlling empty blocks](../tendermint-core/using-tendermint.md#no-empty-blocks)
or by increasing the `consensus.timeout_commit` param. Note that both of these are
local settings and not enforced by the consensus.

We're working on [state
syncing](https://github.com/tendermint/tendermint/issues/828),
which will enable history to be thrown away
and recent application state to be directly synced. We'll need to develop solutions
for archival nodes that allow queries on historical transactions and states.
The Cosmos project has had much success simply dumping the latest state of a
blockchain to disk and starting a new chain from that state.
## Logging

The default logging level (`main:info,state:info,*:error`) should suffice for
normal operation. Read [this
post](https://blog.cosmos.network/one-of-the-exciting-new-features-in-0-10-0-release-is-smart-log-level-flag-e2506b4ab756)
for details on how to configure the `log_level` config variable. Some of the
modules can be found [here](./how-to-read-logs.md#list-of-modules). If
you're trying to debug Tendermint or have been asked to provide logs with debug
logging level, you can do so by running Tendermint with
`--log_level="*:debug"`.
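The same level can also be set persistently via the `log_level` variable in
`config.toml` instead of on the command line; for example, to enable debug
logging for all modules:

```
# in $TMHOME/config/config.toml
log_level = "*:debug"
```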
## Write Ahead Logs (WAL)

Tendermint uses write ahead logs for the consensus (`cs.wal`) and the mempool
(`mempool.wal`). Both WALs have a max size of 1GB and are automatically rotated.

### Consensus WAL

The `consensus.wal` is used to ensure we can recover from a crash at any point
in the consensus state machine.
It writes all consensus messages (timeouts, proposals, block parts, and votes)
to a single file, flushing to disk before processing messages from its own
validator. Since Tendermint validators are expected to never sign a conflicting vote, the
WAL ensures we can always recover deterministically to the latest state of the consensus without
using the network or re-signing any consensus messages.

If your `consensus.wal` is corrupted, see [below](#wal-corruption).

### Mempool WAL

The `mempool.wal` logs all incoming txs before running CheckTx, but is
otherwise not used in any programmatic way. It's just a kind of manual
safeguard. Note that the mempool provides no durability guarantees - a tx sent to one or many nodes
may never make it into the blockchain if those nodes crash before being able to
propose it. Clients must monitor their txs by subscribing over websockets,
polling for them, or using `/broadcast_tx_commit`. In the worst case, txs can be
resent from the mempool WAL manually.

For the above reasons, the `mempool.wal` is disabled by default. To enable it, set
`mempool.wal_dir` to where you want the WAL to be located (e.g.
`data/mempool.wal`).
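In `config.toml`, enabling the mempool WAL with the example path above looks
like:

```
[mempool]
wal_dir = "data/mempool.wal"
```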
## DOS Exposure and Mitigation

Validators are supposed to set up a [Sentry Node
Architecture](https://blog.cosmos.network/tendermint-explained-bringing-bft-based-pos-to-the-public-blockchain-domain-f22e274a0fdb)
to prevent denial-of-service attacks. You can read more about it
[here](../interviews/tendermint-bft.md).

### P2P

The core of the Tendermint peer-to-peer system is `MConnection`. Each
connection has `MaxPacketMsgPayloadSize`, which is the maximum packet
size, and bounded send & receive queues. One can impose restrictions on the
send & receive rate per connection (`SendRate`, `RecvRate`).

### RPC

Endpoints returning multiple entries are limited by default to return 30
elements (100 max). See the [RPC Documentation](https://tendermint.com/rpc/)
for more information.

Rate-limiting and authentication are other key aspects that help protect
against DOS attacks. While we may implement these features in the future,
for now, validators are supposed to use external tools like
[NGINX](https://www.nginx.com/blog/rate-limiting-nginx/) or
[traefik](https://docs.traefik.io/configuration/commons/#rate-limiting)
to achieve the same things.
## Debugging Tendermint

If you ever have to debug Tendermint, the first thing you should
probably do is check the logs. See [How to read
logs](./how-to-read-logs.md), where we explain what certain log
statements mean.

If, after skimming through the logs, things are still not clear, the
next thing to try is querying the `/status` RPC endpoint. It provides the
necessary info: whether the node is syncing or not, what height it is
on, etc.

```
curl http(s)://{ip}:{rpcPort}/status
```

`dump_consensus_state` will give you a detailed overview of the
consensus state (proposer, latest validators, peers states). From it,
you should be able to figure out why, for example, the network
halted.

```
curl http(s)://{ip}:{rpcPort}/dump_consensus_state
```

There is a reduced version of this endpoint - `consensus_state`, which
returns just the votes seen at the current height.

- [Github Issues](https://github.com/tendermint/tendermint/issues)
- [StackOverflow
  questions](https://stackoverflow.com/questions/tagged/tendermint)
### Debug Utility

Tendermint also ships with a `debug` sub-command that allows you to kill a live
Tendermint process while collecting useful information in a compressed archive,
such as the configuration used, consensus state, network state, the node's status,
the WAL, and even the stacktrace of the process before exit. These files can be
useful to examine when debugging a faulty Tendermint process.

In addition, the `debug` sub-command also allows you to dump debugging data into
compressed archives at a regular interval. These archives contain the goroutine
and heap profiles in addition to the consensus state, network info, node status,
and the WAL.
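For illustration, invocations of the two modes might look like the following
(the PID, output paths, and `$TMHOME` value are placeholders - substitute your
own):

```
# Collect a one-shot archive (config, consensus/network state, node
# status, WAL, stacktrace) and kill the process with PID 8888:
tendermint debug kill 8888 /tmp/tendermint-debug.zip --home=$TMHOME

# Poll a live node at a regular interval, writing an archive into the
# destination directory on each pass; this command blocks:
tendermint debug dump /tmp/tendermint-debug --home=$TMHOME
```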
## Monitoring Tendermint

Each Tendermint instance has a standard `/health` RPC endpoint, which
responds with 200 (OK) if everything is fine and 500 (or no response)
if something is wrong.

Other useful endpoints include the aforementioned `/status`, `/net_info` and
`/validators`.

Tendermint can also report and serve Prometheus metrics. See
[Metrics](./metrics.md).
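As a minimal sketch, the `/health` endpoint lends itself to a one-line
liveness probe (the address assumes a node with RPC listening on the default
`localhost:26657`; adjust for your deployment):

```
# curl -f turns an HTTP 500 (or no response) into a non-zero exit code,
# so this prints only when the node is unhealthy.
curl -fsS http://localhost:26657/health > /dev/null || echo "node is unhealthy"
```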
## What happens when my app dies?

You are supposed to run Tendermint under a [process
supervisor](https://en.wikipedia.org/wiki/Process_supervision) (like
systemd or runit). It will ensure Tendermint is always running (despite
possible errors).

Getting back to the original question, if your application dies,
Tendermint will panic. After a process supervisor restarts your
application, Tendermint should be able to reconnect successfully. The
order of restart does not matter.

## Signal handling

We catch SIGINT and SIGTERM and try to clean up nicely. For other
signals we use the default behaviour in Go: [Default behavior of signals
in Go
programs](https://golang.org/pkg/os/signal/#hdr-Default_behavior_of_signals_in_Go_programs).
## Corruption

**NOTE:** Make sure you have a backup of the Tendermint data directory.

### Possible causes

Remember that most corruption is caused by hardware issues:

- RAID controllers with faulty / worn out battery backup, and an unexpected power loss
- Hard disk drives with write-back cache enabled, and an unexpected power loss
- Cheap SSDs with insufficient power-loss protection, and an unexpected power loss
- Defective RAM
- Defective or overheating CPU(s)

Other causes can be:

- Database systems configured with fsync=off and an OS crash or power loss
- Filesystems configured to use write barriers plus a storage layer that ignores write barriers. LVM is a particular culprit.
- Tendermint bugs
- Operating system bugs
- Admin error (e.g., directly modifying Tendermint data-directory contents)

(Source: https://wiki.postgresql.org/wiki/Corruption)
### WAL Corruption

If the consensus WAL is corrupted at the latest height and you are trying to start
Tendermint, replay will fail with a panic.

Recovering from data corruption can be hard and time-consuming. Here are two approaches you can take:

1. Delete the WAL file and restart Tendermint. It will attempt to sync with other peers.
2. Try to repair the WAL file manually:

   1. Create a backup of the corrupted WAL file:

      ```
      cp "$TMHOME/data/cs.wal/wal" /tmp/corrupted_wal_backup
      ```

   2. Use `./scripts/wal2json` to create a human-readable version:

      ```
      ./scripts/wal2json/wal2json "$TMHOME/data/cs.wal/wal" > /tmp/corrupted_wal
      ```

   3. Search for a "CORRUPTED MESSAGE" line.
   4. By looking at the previous message, the message after the corrupted one,
      and the logs, try to rebuild the message. If the subsequent
      messages are marked as corrupted too (this may happen if the length header
      got corrupted or some writes did not make it to the WAL ~ truncation),
      then remove all the lines starting from the corrupted one and restart
      Tendermint.

      ```
      $EDITOR /tmp/corrupted_wal
      ```

   5. After editing, convert this file back into binary form by running:

      ```
      ./scripts/json2wal/json2wal /tmp/corrupted_wal $TMHOME/data/cs.wal/wal
      ```
## Hardware

### Processor and Memory

While actual specs vary depending on the load and validator count,
minimal requirements are:

- 1GB RAM
- 25GB of disk space
- 1.4 GHz CPU

SSD disks are preferable for applications with high transaction
throughput.

Recommended:

- 2GB RAM
- 100GB SSD
- x64 2.0 GHz 2v CPU

For now, Tendermint stores all the history, which may require
significant disk space over time. We are planning to implement state
syncing (see
[this issue](https://github.com/tendermint/tendermint/issues/828)), after which
storing all the past blocks will not be necessary.

### Operating Systems

Tendermint can be compiled for a wide range of operating systems thanks
to Go (the list of \$OS/\$ARCH pairs can be found
[here](https://golang.org/doc/install/source#environment)).

While we do not favor any operating system, more secure and stable Linux
server distributions (like CentOS) should be preferred over desktop
operating systems (like macOS).

### Miscellaneous

NOTE: if you are going to use Tendermint in a public domain, make sure
you read the [hardware recommendations](https://cosmos.network/validators) for a validator in the
Cosmos network.
## Configuration parameters

- `p2p.flush_throttle_timeout`
- `p2p.max_packet_msg_payload_size`
- `p2p.send_rate`
- `p2p.recv_rate`

If you are going to use Tendermint in a private domain and you have a
private high-speed network among your peers, it makes sense to lower the
flush throttle timeout and increase the other params.

```
[p2p]

send_rate=20000000 # 20MB/s
recv_rate=20000000 # 20MB/s
flush_throttle_timeout=10
max_packet_msg_payload_size=10240 # 10KB
```
- `mempool.recheck`

After every block, Tendermint rechecks every transaction left in the
mempool to see if transactions committed in that block affected the
application state, so some of the transactions left may become invalid.
If that does not apply to your application, you can disable it by
setting `mempool.recheck=false`.

- `mempool.broadcast`

Setting this to false will stop the mempool from relaying transactions
to other peers until they are included in a block. It means only the
peer you send the tx to will see it until it is included in a block.
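Both settings live under the `[mempool]` section of `config.toml`; the values
shown here are illustrative, not recommendations:

```
[mempool]
recheck = false   # skip re-running CheckTx on remaining txs after each block
broadcast = true  # set to false to stop relaying txs to peers
```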
- `consensus.skip_timeout_commit`

We want `skip_timeout_commit=false` when there is economics on the line
because proposers should wait to hear more votes. But if you don't
care about that and want the fastest consensus, you can skip it. It will
be kept false by default for public deployments (e.g. [Cosmos
Hub](https://cosmos.network/intro/hub)), while for enterprise
applications, setting it to true is not a problem.

- `consensus.peer_gossip_sleep_duration`

You can try to reduce the time your node sleeps before checking if
there's something to send to its peers.

- `consensus.timeout_commit`

You can also try lowering `timeout_commit` (the time we sleep before
proposing the next block).
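These knobs sit under the `[consensus]` section of `config.toml`; the
durations below are illustrative examples for a fast private network, not
recommended values:

```
[consensus]
skip_timeout_commit = false          # keep false for public deployments
peer_gossip_sleep_duration = "50ms"  # shorter = more aggressive gossip
timeout_commit = "1s"                # wait before proposing the next block
```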
- `p2p.addr_book_strict`

By default, Tendermint checks whether a peer's address is routable before
saving it to the address book. An address is considered routable if the IP
is [valid and within allowed
ranges](https://github.com/tendermint/tendermint/blob/27bd1deabe4ba6a2d9b463b8f3e3f1e31b993e61/p2p/netaddress.go#L209).

This may not be the case for private or local networks, where your IP range is usually
strictly limited and private. In that case, you need to set `addr_book_strict`
to `false` (turn it off).
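In `config.toml` this corresponds to:

```
[p2p]
addr_book_strict = false
```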
- `rpc.max_open_connections`

By default, the number of simultaneous connections is limited because most OSs
give you a limited number of file descriptors.

If you want to accept a greater number of connections, you will need to increase
these limits.

[Sysctls to tune the system to be able to open more connections](https://github.com/satori-com/tcpkali/blob/master/doc/tcpkali.man.md#sysctls-to-tune-the-system-to-be-able-to-open-more-connections)

...for N connections, such as 50k:

```
kern.maxfiles=10000+2*N         # BSD
kern.maxfilesperproc=100+2*N    # BSD
kern.ipc.maxsockets=10000+2*N   # BSD
fs.file-max=10000+2*N           # Linux
net.ipv4.tcp_max_orphans=N      # Linux

# For load-generating clients.
net.ipv4.ip_local_port_range="10000 65535"  # Linux.
net.inet.ip.portrange.first=10000           # BSD/Mac.
net.inet.ip.portrange.last=65535            # (Enough for N < 55535)
net.ipv4.tcp_tw_reuse=1                     # Linux
net.inet.tcp.maxtcptw=2*N                   # BSD

# If using netfilter on Linux:
net.netfilter.nf_conntrack_max=N
echo $((N/8)) > /sys/module/nf_conntrack/parameters/hashsize
```

A similar option exists for limiting the number of gRPC connections -
`rpc.grpc_max_open_connections`.
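Both limits live under the `[rpc]` section of `config.toml`; the values below
are illustrative, not recommendations:

```
[rpc]
max_open_connections = 900
grpc_max_open_connections = 900
```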