---
order: 4
---

# Running in production

## Database

By default, Tendermint uses the `syndtr/goleveldb` package for its in-process
key-value database.

Tendermint keeps multiple distinct databases in `$TMROOT/data`:

- `blockstore.db`: Keeps the entire blockchain - stores blocks,
  block commits, and block metadata, each indexed by height. Used to sync new
  peers.
- `evidence.db`: Stores all verified evidence of misbehaviour.
- `state.db`: Stores the current blockchain state (i.e. height, validators,
  consensus params). Only grows if consensus params or validators change. Also
  used to temporarily store intermediate results during block processing.
- `tx_index.db`: Indexes txs (and their results) by tx hash and by DeliverTx result events.

By default, Tendermint will only index txs by their hash and height, not by their DeliverTx
result events. See [indexing transactions](../app-dev/indexing-transactions.md) for
details.
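
For orientation, listing the data directory should show these databases alongside
the consensus WAL. This is a rough sketch; the exact contents depend on your
configuration (for example, whether the mempool WAL is enabled):

```bash
# Inspect the node's data directory (contents vary by configuration).
ls "$TMROOT/data"
# blockstore.db  cs.wal  evidence.db  state.db  tx_index.db
```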

Applications can expose block pruning strategies to the node operator. Please read
your application's documentation for more details.

Applications can use [state sync](state-sync.md) to help nodes bootstrap quickly.

## Logging

The default logging level (`log_level = "main:info,state:info,statesync:info,*:error"`) should suffice for
normal operation. Read [this
post](https://blog.cosmos.network/one-of-the-exciting-new-features-in-0-10-0-release-is-smart-log-level-flag-e2506b4ab756)
for details on how to configure the `log_level` config variable. Some of the
modules can be found [here](./how-to-read-logs.md#list-of-modules). If
you're trying to debug Tendermint or are asked to provide logs with the debug
logging level, you can do so by running Tendermint with
`--log_level="*:debug"`.
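
For example, to capture debug-level logs and keep them for later analysis
(assuming the standard `tendermint node` start command and that you are able to
restart the node):

```bash
# Run the node with debug logging for all modules and save the output to a file.
tendermint node --log_level="*:debug" 2>&1 | tee tendermint_debug.log
```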

## Write Ahead Logs (WAL)

Tendermint uses write ahead logs for the consensus (`cs.wal`) and the mempool
(`mempool.wal`). Both WALs have a max size of 1GB and are automatically rotated.

### Consensus WAL

The `consensus.wal` is used to ensure we can recover from a crash at any point
in the consensus state machine.
It writes all consensus messages (timeouts, proposals, block parts, and votes)
to a single file, flushing to disk before processing messages from its own
validator. Since Tendermint validators are expected to never sign a conflicting vote, the
WAL ensures we can always recover deterministically to the latest state of the consensus without
using the network or re-signing any consensus messages.

If your `consensus.wal` is corrupted, see [below](#wal-corruption).

### Mempool WAL

The `mempool.wal` logs all incoming txs before running CheckTx, but is
otherwise not used in any programmatic way. It's just a kind of manual
safeguard. Note the mempool provides no durability guarantees - a tx sent to one or many nodes
may never make it into the blockchain if those nodes crash before being able to
propose it. Clients must monitor their txs by subscribing over websockets,
polling for them, or using `/broadcast_tx_commit`. In the worst case, txs can be
resent from the mempool WAL manually.

For the above reasons, the `mempool.wal` is disabled by default. To enable it, set
`mempool.wal_dir` to where you want the WAL to be located (e.g.
`data/mempool.wal`).
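
As a minimal sketch (assuming the default config layout, where `wal_dir` is an
empty string under the `[mempool]` section of `config.toml`), you could enable it
like this and then verify the result by hand:

```bash
# Hypothetical one-liner: point the mempool WAL at data/mempool.wal.
# Double-check the [mempool] section of config.toml afterwards.
sed -i 's|^wal_dir = ""|wal_dir = "data/mempool.wal"|' "$TMHOME/config/config.toml"
```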

## DOS Exposure and Mitigation

Validators are supposed to set up a [Sentry Node
Architecture](./validators.md)
to prevent denial-of-service attacks.

### P2P

The core of the Tendermint peer-to-peer system is `MConnection`. Each
connection has a maximum packet size (`MaxPacketMsgPayloadSize`) and bounded
send & receive queues. One can impose restrictions on the
send & receive rate per connection (`SendRate`, `RecvRate`).

The number of open P2P connections can become quite large and hit the operating system's open
file limit (since TCP connections are considered files on UNIX-based systems). Nodes should be
given a sizable open file limit, e.g. 8192, via `ulimit -n 8192` or other deployment-specific
mechanisms.

### RPC

Endpoints returning multiple entries are limited by default to return 30
elements (100 max). See the [RPC Documentation](https://docs.tendermint.com/master/rpc/)
for more information.
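
For example, paginated endpoints such as `/validators` accept `page` and
`per_page` query parameters (this query is illustrative; check the RPC
documentation above for the exact parameters of each endpoint):

```bash
# Fetch up to 100 validators (the per-request maximum) from the first page.
curl "http://{ip}:{rpcPort}/validators?page=1&per_page=100"
```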

Rate-limiting and authentication are other key aspects that help protect
against DOS attacks. Validators are supposed to use external tools like
[NGINX](https://www.nginx.com/blog/rate-limiting-nginx/) or
[traefik](https://docs.traefik.io/middlewares/ratelimit/)
to achieve this.

## Debugging Tendermint

If you ever have to debug Tendermint, the first thing you should probably do is
check out the logs. See [How to read logs](./how-to-read-logs.md), where we
explain what certain log statements mean.

If, after skimming through the logs, things are still not clear, the next thing
to try is querying the `/status` RPC endpoint. It provides the necessary info:
whether the node is syncing or not, what height it is on, etc.

```bash
curl http(s)://{ip}:{rpcPort}/status
```

`/dump_consensus_state` will give you a detailed overview of the consensus
state (proposer, latest validators, peer states). From it, you should be able
to figure out why, for example, the network has halted.

```bash
curl http(s)://{ip}:{rpcPort}/dump_consensus_state
```

There is a reduced version of this endpoint - `/consensus_state`, which returns
just the votes seen at the current height.
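
It can be queried in the same way:

```bash
curl http(s)://{ip}:{rpcPort}/consensus_state
```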

If, after consulting the logs and the above endpoints, you still have no idea
what's happening, consider using the `tendermint debug kill` sub-command. This
command will scrape all the available info and kill the process. See
[Debugging](../tools/debugging.md) for the exact format.
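
For example (the PID, output path, and home directory are placeholders):

```bash
# Write debug info (config, consensus state, net info, stacktrace, status, WAL)
# into a compressed archive, then kill the Tendermint process.
tendermint debug kill <pid> </path/to/out.zip> --home=</path/to/app.d>
```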

You can inspect the resulting archive yourself or create an issue on
[GitHub](https://github.com/tendermint/tendermint). Before opening an issue,
however, be sure to check whether an [existing
issue](https://github.com/tendermint/tendermint/issues) already covers it.

## Monitoring Tendermint

Each Tendermint instance has a standard `/health` RPC endpoint, which responds
with 200 (OK) if everything is fine and with 500 (or no response) if something is
wrong.
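
It can be polled in the same way as the other endpoints:

```bash
curl http(s)://{ip}:{rpcPort}/health
```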

Other useful endpoints include the previously mentioned `/status`, `/net_info` and
`/validators`.

Tendermint can also report and serve Prometheus metrics. See
[Metrics](./metrics.md).

The `tendermint debug dump` sub-command can be used to periodically dump useful
information into an archive. See [Debugging](../tools/debugging.md) for more
information.
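
For example (the output and home directories are placeholders):

```bash
# Poll the node at a fixed interval and write compressed archives of debug data
# (consensus state, net info, status, WAL, and - if profiling is enabled -
# goroutine and heap dumps) into the output directory.
tendermint debug dump </path/to/out> --home=</path/to/app.d>
```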

## What happens when my app dies

You are supposed to run Tendermint under a [process
supervisor](https://en.wikipedia.org/wiki/Process_supervision) (like
systemd or runit). It will ensure Tendermint is always running (despite
possible errors).
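
As a minimal sketch of the systemd approach (the unit name, user, binary path,
and home directory below are assumptions; adjust them to your deployment before
enabling the service):

```bash
# Create an illustrative systemd unit that restarts Tendermint on failure
# and raises the open-file limit, then enable and start it.
sudo tee /etc/systemd/system/tendermint.service > /dev/null <<'EOF'
[Unit]
Description=Tendermint node
After=network-online.target

[Service]
User=tendermint
ExecStart=/usr/local/bin/tendermint node --home=/home/tendermint/.tendermint
Restart=on-failure
RestartSec=3
LimitNOFILE=8192

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now tendermint
```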

Getting back to the original question, if your application dies,
Tendermint will panic. After a process supervisor restarts your
application, Tendermint should be able to reconnect successfully. The
order in which the processes are restarted does not matter.

## Signal handling

We catch SIGINT and SIGTERM and try to clean up nicely. For other
signals we use the default behavior in Go: [Default behavior of signals
in Go
programs](https://golang.org/pkg/os/signal/#hdr-Default_behavior_of_signals_in_Go_programs).
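
For a clean shutdown by hand (assuming the process is named `tendermint`), send
one of the handled signals:

```bash
# SIGTERM is caught, so Tendermint shuts down gracefully.
kill -TERM "$(pgrep tendermint)"
```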

## Corruption

**NOTE:** Make sure you have a backup of the Tendermint data directory.

### Possible causes

Remember that most corruption is caused by hardware issues:

- RAID controllers with faulty / worn out battery backup, and an unexpected power loss
- Hard disk drives with write-back cache enabled, and an unexpected power loss
- Cheap SSDs with insufficient power-loss protection, and an unexpected power loss
- Defective RAM
- Defective or overheating CPU(s)

Other causes can be:

- Database systems configured with fsync=off and an OS crash or power loss
- Filesystems configured to use write barriers plus a storage layer that ignores write barriers. LVM is a particular culprit.
- Tendermint bugs
- Operating system bugs
- Admin error (e.g., directly modifying Tendermint data-directory contents)

(Source: <https://wiki.postgresql.org/wiki/Corruption>)

### WAL Corruption

If the consensus WAL is corrupted at the latest height and you are trying to start
Tendermint, replay will fail with a panic.

Recovering from data corruption can be hard and time-consuming. Here are two approaches you can take:

1. Delete the WAL file and restart Tendermint. It will attempt to sync with other peers.
2. Try to repair the WAL file manually:

   1) Create a backup of the corrupted WAL file:

      ```sh
      cp "$TMHOME/data/cs.wal/wal" /tmp/corrupted_wal_backup
      ```

   2) Use `./scripts/wal2json` to create a human-readable version:

      ```sh
      ./scripts/wal2json/wal2json "$TMHOME/data/cs.wal/wal" > /tmp/corrupted_wal
      ```

   3) Search for a "CORRUPTED MESSAGE" line.
   4) By looking at the previous message, the message after the corrupted one,
      and the logs, try to rebuild the message. If the subsequent messages are
      marked as corrupted too (this may happen if the length header got
      corrupted or some writes did not make it to the WAL ~ truncation), then
      remove all the lines starting from the corrupted one and restart
      Tendermint.

      ```sh
      $EDITOR /tmp/corrupted_wal
      ```

   5) After editing, convert this file back into binary form by running:

      ```sh
      ./scripts/json2wal/json2wal /tmp/corrupted_wal $TMHOME/data/cs.wal/wal
      ```

## Hardware

### Processor and Memory

While actual specs vary depending on the load and validator count, minimal
requirements are:

- 1GB RAM
- 25GB of disk space
- 1.4 GHz CPU

SSD disks are preferable for applications with high transaction throughput.

Recommended:

- 2GB RAM
- 100GB SSD
- x64 2.0 GHz 2v CPU

For now, Tendermint stores all the history, which may require significant
disk space over time. We are planning to implement state syncing (see [this
issue](https://github.com/tendermint/tendermint/issues/828)), after which storing all
the past blocks will not be necessary.

### Validator signing on 32 bit architectures (or ARM)

Both our `ed25519` and `secp256k1` implementations require constant time
`uint64` multiplication. Non-constant time crypto can (and has) leaked
private keys on both `ed25519` and `secp256k1`. Constant-time multiplication
doesn't exist in hardware on 32 bit x86 platforms ([source](https://bearssl.org/ctmul.html)),
and it depends on the compiler to enforce that it is constant time. It's unclear at
this point whether the Go compiler does this correctly for all
implementations.

**We do not support nor recommend running a validator on 32 bit architectures,
on the "VIA Nano 2000 Series", or on the architectures in the ARM section rated
"S-".**

### Operating Systems

Tendermint can be compiled for a wide range of operating systems thanks to the Go
language (the list of \$OS/\$ARCH pairs can be found
[here](https://golang.org/doc/install/source#environment)).

While we do not favor any operating system, more secure and stable Linux server
distributions (like CentOS) should be preferred over desktop operating systems
(like macOS).

### Miscellaneous

NOTE: If you are going to use Tendermint in a public domain, make sure you read the
[hardware recommendations](https://cosmos.network/validators) for a validator in the
Cosmos network.

## Configuration parameters

- `p2p.flush_throttle_timeout`
- `p2p.max_packet_msg_payload_size`
- `p2p.send_rate`
- `p2p.recv_rate`

If you are going to use Tendermint in a private domain and you have a
private high-speed network among your peers, it makes sense to lower the
flush throttle timeout and increase the other params.

```toml
[p2p]
send_rate=20000000 # 20 MB/s
recv_rate=20000000 # 20 MB/s
flush_throttle_timeout=10
max_packet_msg_payload_size=10240 # 10KB
```

- `mempool.recheck`

After every block, Tendermint rechecks every transaction left in the
mempool to see if transactions committed in that block affected the
application state, since some of the remaining transactions may have become
invalid. If that does not apply to your application, you can disable rechecking
by setting `mempool.recheck=false`.

- `mempool.broadcast`

Setting this to false will stop the mempool from relaying transactions
to other peers until they are included in a block. It means only the
peer you send the tx to will see it until it is included in a block.

- `consensus.skip_timeout_commit`

We want `skip_timeout_commit=false` when there is economics on the line
because proposers should wait to hear more votes. But if you don't
care about that and want the fastest consensus, you can skip it. It will
be kept false by default for public deployments (e.g. [Cosmos
Hub](https://cosmos.network/intro/hub)), while for enterprise
applications, setting it to true is not a problem.

- `consensus.peer_gossip_sleep_duration`

You can try to reduce the time your node sleeps before checking if
there's something to send to its peers.

- `consensus.timeout_commit`

You can also try lowering `timeout_commit` (the time we sleep before
proposing the next block).

- `p2p.addr_book_strict`

By default, Tendermint checks whether a peer's address is routable before
saving it to the address book. The address is considered routable if the IP
is [valid and within allowed
ranges](https://github.com/tendermint/tendermint/blob/27bd1deabe4ba6a2d9b463b8f3e3f1e31b993e61/p2p/netaddress.go#L209).
This may not be the case for private or local networks, where your IP range is usually
strictly limited and private. In that case, you need to set `addr_book_strict`
to `false` (turn it off).

- `rpc.max_open_connections`

By default, the number of simultaneous connections is limited because most
operating systems give you a limited number of file descriptors.

If you want to accept a greater number of connections, you will need to increase
these limits.

[Sysctls to tune the system to be able to open more connections](https://github.com/satori-com/tcpkali/blob/master/doc/tcpkali.man.md#sysctls-to-tune-the-system-to-be-able-to-open-more-connections)

The process file limits must also be increased, e.g. via `ulimit -n 8192`.

...for N connections, such as 50k:

```md
kern.maxfiles=10000+2*N # BSD
kern.maxfilesperproc=100+2*N # BSD
kern.ipc.maxsockets=10000+2*N # BSD
fs.file-max=10000+2*N # Linux
net.ipv4.tcp_max_orphans=N # Linux

# For load-generating clients.
net.ipv4.ip_local_port_range="10000 65535" # Linux.
net.inet.ip.portrange.first=10000 # BSD/Mac.
net.inet.ip.portrange.last=65535 # (Enough for N < 55535)
net.ipv4.tcp_tw_reuse=1 # Linux
net.inet.tcp.maxtcptw=2*N # BSD

# If using netfilter on Linux:
net.netfilter.nf_conntrack_max=N
echo $((N/8)) > /sys/module/nf_conntrack/parameters/hashsize
```

A similar option exists for limiting the number of gRPC connections -
`rpc.grpc_max_open_connections`.