// Commons for HTTP handling
package server

import (
	"bufio"
	"encoding/json"
	"errors"
	"fmt"
	"net"
	"net/http"
	"os"
	"runtime/debug"
	"strings"
	"time"

	"golang.org/x/net/netutil"

	"github.com/tendermint/tendermint/libs/log"
	rpctypes "github.com/tendermint/tendermint/rpc/jsonrpc/types"
)

// Config is an RPC server configuration.
type Config struct {
	// see netutil.LimitListener
	MaxOpenConnections int
	// mirrors http.Server#ReadTimeout
	ReadTimeout time.Duration
	// mirrors http.Server#WriteTimeout
	WriteTimeout time.Duration
	// MaxBodyBytes controls the maximum number of bytes the
	// server will read parsing the request body.
	MaxBodyBytes int64
	// mirrors http.Server#MaxHeaderBytes
	MaxHeaderBytes int
}

// DefaultConfig returns a default configuration.
func DefaultConfig() *Config {
	return &Config{
		MaxOpenConnections: 0, // unlimited
		ReadTimeout:        10 * time.Second,
		WriteTimeout:       10 * time.Second,
		MaxBodyBytes:       int64(1000000), // 1MB
		MaxHeaderBytes:     1 << 20,        // same as the net/http default
	}
}

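// Illustrative sketch (not part of the original file): a caller will
// typically start from DefaultConfig and override individual fields.
// The values below are hypothetical:
//
//	cfg := DefaultConfig()
//	cfg.MaxOpenConnections = 100        // cap concurrent connections
//	cfg.WriteTimeout = 30 * time.Second // allow slower responses
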
// Serve creates a http.Server and calls Serve with the given listener. It
// wraps handler with RecoverAndLogHandler and a handler that limits the max
// body size to config.MaxBodyBytes.
//
// NOTE: This function blocks - you may want to call it in a go-routine.
func Serve(listener net.Listener, handler http.Handler, logger log.Logger, config *Config) error {
	logger.Info(fmt.Sprintf("Starting RPC HTTP server on %s", listener.Addr()))
	s := &http.Server{
		Handler:        RecoverAndLogHandler(maxBytesHandler{h: handler, n: config.MaxBodyBytes}, logger),
		ReadTimeout:    config.ReadTimeout,
		WriteTimeout:   config.WriteTimeout,
		MaxHeaderBytes: config.MaxHeaderBytes,
	}
	err := s.Serve(listener)
	logger.Info("RPC HTTP server stopped", "err", err)
	return err
}

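// Illustrative sketch (not part of the original file): because Serve blocks,
// a caller will usually run it in its own goroutine. The listener, mux and
// logger values here are assumed to be created elsewhere (e.g. the listener
// via Listen further below):
//
//	go func() {
//		if err := Serve(listener, mux, logger, DefaultConfig()); err != nil {
//			logger.Error("RPC HTTP server terminated", "err", err)
//		}
//	}()
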
// ServeTLS creates a http.Server and calls ServeTLS with the given listener,
// certFile and keyFile. It wraps handler with RecoverAndLogHandler and a
// handler that limits the max body size to config.MaxBodyBytes.
//
// NOTE: This function blocks - you may want to call it in a go-routine.
func ServeTLS(
	listener net.Listener,
	handler http.Handler,
	certFile, keyFile string,
	logger log.Logger,
	config *Config,
) error {
	logger.Info(fmt.Sprintf("Starting RPC HTTPS server on %s (cert: %q, key: %q)",
		listener.Addr(), certFile, keyFile))
	s := &http.Server{
		Handler:        RecoverAndLogHandler(maxBytesHandler{h: handler, n: config.MaxBodyBytes}, logger),
		ReadTimeout:    config.ReadTimeout,
		WriteTimeout:   config.WriteTimeout,
		MaxHeaderBytes: config.MaxHeaderBytes,
	}
	err := s.ServeTLS(listener, certFile, keyFile)
	logger.Error("RPC HTTPS server stopped", "err", err)
	return err
}

// WriteRPCResponseHTTPError marshals res as JSON (with indent) and writes it
// to w.
//
// Maps JSON RPC error codes to HTTP Status codes as follows:
//
//	HTTP Status    code              message
//	500            -32700            Parse error.
//	400            -32600            Invalid Request.
//	404            -32601            Method not found.
//	500            -32602            Invalid params.
//	500            -32603            Internal error.
//	500            -32099..-32000    Server error.
//
// source: https://www.jsonrpc.org/historical/json-rpc-over-http.html
func WriteRPCResponseHTTPError(
	w http.ResponseWriter,
	res rpctypes.RPCResponse,
) error {
	if res.Error == nil {
		panic("tried to write http error response without RPC error")
	}

	jsonBytes, err := json.MarshalIndent(res, "", " ")
	if err != nil {
		return fmt.Errorf("json marshal: %w", err)
	}

	var httpCode int
	switch res.Error.Code {
	case -32600:
		httpCode = http.StatusBadRequest
	case -32601:
		httpCode = http.StatusNotFound
	default:
		httpCode = http.StatusInternalServerError
	}

	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(httpCode)
	_, err = w.Write(jsonBytes)
	return err
}

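// Illustrative sketch (not part of the original file): for example, an
// internal error (code -32603) built with the helper used elsewhere in this
// file is written with HTTP status 500; err is assumed to come from the
// handler:
//
//	res := rpctypes.RPCInternalError(rpctypes.JSONRPCIntID(-1), err)
//	_ = WriteRPCResponseHTTPError(w, res) // -32603 falls through to StatusInternalServerError
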
// WriteRPCResponseHTTP marshals res as JSON (with indent) and writes it to w.
// If the RPC response can be cached (c is true), a Cache-Control header is
// added to the response.
func WriteRPCResponseHTTP(w http.ResponseWriter, c bool, res ...rpctypes.RPCResponse) error {
	var v interface{}
	if len(res) == 1 {
		v = res[0]
	} else {
		v = res
	}

	jsonBytes, err := json.MarshalIndent(v, "", " ")
	if err != nil {
		return fmt.Errorf("json marshal: %w", err)
	}
	w.Header().Set("Content-Type", "application/json")
	if c {
		w.Header().Set("Cache-Control", "max-age=31536000") // expires after one year
	}
	w.WriteHeader(200)
	_, err = w.Write(jsonBytes)
	return err
}

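// Illustrative sketch (not part of the original file): the c flag only
// controls the Cache-Control header; for a single response res:
//
//	_ = WriteRPCResponseHTTP(w, true, res)  // cacheable: adds Cache-Control: max-age=31536000
//	_ = WriteRPCResponseHTTP(w, false, res) // not cacheable: no Cache-Control header
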
//-----------------------------------------------------------------------------

// RecoverAndLogHandler wraps an HTTP handler, adding error logging.
// If the inner function panics, the outer function recovers, logs, and sends
// an HTTP 500 error response.
func RecoverAndLogHandler(handler http.Handler, logger log.Logger) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Wrap the ResponseWriter to remember the status.
		rww := &responseWriterWrapper{-1, w}
		begin := time.Now()

		rww.Header().Set("X-Server-Time", fmt.Sprintf("%v", begin.Unix()))

		defer func() {
			// Handle any panics in the panic handler below. Does not use the logger, since we want
			// to avoid any further panics. However, we try to return a 500, since it otherwise
			// defaults to 200 and there is no other way to terminate the connection. If that
			// should panic for whatever reason then the Go HTTP server will handle it and
			// terminate the connection - panicking is the de-facto and only way to get the Go HTTP
			// server to terminate the request and close the connection/stream:
			// https://github.com/golang/go/issues/17790#issuecomment-258481416
			if e := recover(); e != nil {
				fmt.Fprintf(os.Stderr, "Panic during RPC panic recovery: %v\n%v\n", e, string(debug.Stack()))
				w.WriteHeader(500)
			}
		}()

		defer func() {
			// Send a 500 error if a panic happens during a handler.
			// Without this, Chrome & Firefox were retrying aborted ajax requests,
			// at least to my localhost.
			if e := recover(); e != nil {
				// If the panic value is already an RPCResponse, write it as-is.
				if res, ok := e.(rpctypes.RPCResponse); ok {
					if wErr := WriteRPCResponseHTTP(rww, false, res); wErr != nil {
						logger.Error("failed to write response", "res", res, "err", wErr)
					}
				} else {
					// Panics can contain anything; attempt to normalize it as an error.
					var err error
					switch e := e.(type) {
					case error:
						err = e
					case string:
						err = errors.New(e)
					case fmt.Stringer:
						err = errors.New(e.String())
					default:
					}

					logger.Error("panic in RPC HTTP handler", "err", e, "stack", string(debug.Stack()))

					res := rpctypes.RPCInternalError(rpctypes.JSONRPCIntID(-1), err)
					if wErr := WriteRPCResponseHTTPError(rww, res); wErr != nil {
						logger.Error("failed to write response", "res", res, "err", wErr)
					}
				}
			}

			// Finally, log.
			durationMS := time.Since(begin).Nanoseconds() / 1000000
			if rww.Status == -1 {
				rww.Status = 200
			}
			logger.Debug("served RPC HTTP response",
				"method", r.Method,
				"url", r.URL,
				"status", rww.Status,
				"duration", durationMS,
				"remoteAddr", r.RemoteAddr,
			)
		}()

		handler.ServeHTTP(rww, r)
	})
}

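// Illustrative sketch (not part of the original file): Serve and ServeTLS
// already apply this wrapper, but it composes like ordinary middleware, so a
// caller managing its own http.Server could wrap a mux directly (mux and
// logger are assumed to exist; the address is hypothetical):
//
//	srv := &http.Server{
//		Addr:    "127.0.0.1:26657",
//		Handler: RecoverAndLogHandler(mux, logger),
//	}
//	_ = srv.ListenAndServe()
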
// Remember the status for logging
type responseWriterWrapper struct {
	Status int
	http.ResponseWriter
}

func (w *responseWriterWrapper) WriteHeader(status int) {
	w.Status = status
	w.ResponseWriter.WriteHeader(status)
}

// implements http.Hijacker
func (w *responseWriterWrapper) Hijack() (net.Conn, *bufio.ReadWriter, error) {
	return w.ResponseWriter.(http.Hijacker).Hijack()
}

type maxBytesHandler struct {
	h http.Handler
	n int64
}

func (h maxBytesHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	r.Body = http.MaxBytesReader(w, r.Body, h.n)
	h.h.ServeHTTP(w, r)
}

// Listen starts a new net.Listener on the given address.
// It returns an error if the address is invalid or the call to Listen() fails.
func Listen(addr string, maxOpenConnections int) (listener net.Listener, err error) {
	parts := strings.SplitN(addr, "://", 2)
	if len(parts) != 2 {
		return nil, fmt.Errorf(
			"invalid listening address %s (use fully formed addresses, including the tcp:// or unix:// prefix)",
			addr,
		)
	}
	proto, addr := parts[0], parts[1]
	listener, err = net.Listen(proto, addr)
	if err != nil {
		return nil, fmt.Errorf("failed to listen on %v: %v", addr, err)
	}
	if maxOpenConnections > 0 {
		listener = netutil.LimitListener(listener, maxOpenConnections)
	}
	return listener, nil
}
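
// Illustrative sketch (not part of the original file): Listen requires the
// protocol prefix in the address; the addresses below are hypothetical:
//
//	l, err := Listen("tcp://0.0.0.0:26657", 100) // at most 100 open connections
//	// or: Listen("unix:///tmp/rpc.sock", 0)     // 0 means unlimited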