

rpc: add support for batched requests/responses (#3534)

Continues from #3280 in building support for batched requests/responses in the JSON-RPC layer (per issue #3213).

* Add JSON-RPC batching for client and server: per #3213, this adds support for [JSON-RPC batch requests and responses](https://www.jsonrpc.org/specification#batch)
* Add additional checks to ensure client responses match their requests
* Fix the case where a notification is sent and no response is expected
* Add a test to check that JSON-RPC notifications in a batch are left out of the responses
* Update CHANGELOG_PENDING.md, and update the PR number now that the PR has been created
* Make error strings start with a lowercase letter, including errors.Wrap() messages
* Refactor the batch functionality to be standalone, so that concurrent goroutines sharing the same client can each build a batch of requests and send it without interfering with another goroutine's batch
* Add examples for simple and batch HTTP client usage
* Check errors from the writer and remove nolint directives
* Refactor the examples to make them testable
* Use a safer deferred shutdown for the example Tendermint test node
* Recompose the rpcClient interface from pre-existing interface components
* Rename the WaitGroup for brevity, and replace the empty ID string with the request ID
* Remove an extraneous test case and an extraneous function parameter
* Reorder the WaitGroup.Done call to help prevent race conditions in the face of failure
* Swap the mutex to a value representation and remove its initialization
* Restore the empty JSONRPC string ID in responses to prevent nil
* Make JSONRPCBufferedRequest private
* Revert the PR hard link in CHANGELOG_PENDING
* Add a client ID for JSONRPCClient: automatically generate a randomized client ID, and check the IDs in responses when one was set in the requests
* Extract response ID validation into a separate function
* Reorder struct fields to indicate clearly which are protected by the mutex
* Tidy loops, conditionals, and variable declarations for readability, and compress error-check statements
* Make the function description more generic to show that we support different protocols
* Preallocate memory for request and result objects
6 years ago
rpc/test: wait for mempool CheckTx callback (#4908)

Fixes race conditions causing the following test failures:

```
=== RUN   TestUnconfirmedTxs
    TestUnconfirmedTxs: rpc_test.go:342: Error Trace: rpc_test.go:342
        Error: Not equal: expected: 1, actual: 0
    TestUnconfirmedTxs: rpc_test.go:343: Error Trace: rpc_test.go:343
        Error: Not equal: expected: 1, actual: 0
    TestUnconfirmedTxs: rpc_test.go:345: Error Trace: rpc_test.go:345
        Error: Not equal:
        expected: types.Txs{types.Tx{0x39, 0x44, 0x4d, 0x6c, 0x4b, 0x66, 0x46, 0x78, 0x3d, 0x45, 0x33, 0x33, 0x68, 0x47, 0x6e, 0x79, 0x58}}
        actual  : types.Txs(nil)
        Diff:
        --- Expected
        +++ Actual
        @@ -1,4 +1,2 @@
        -(types.Txs) (len=1) {
        - (types.Tx) (len=17) Tx{39444D6C4B6646783D45333368476E7958}
        -}
        +(types.Txs) <nil>
    TestUnconfirmedTxs: rpc_test.go:342: Error Trace: rpc_test.go:342
        Error: Not equal: expected: 1, actual: 0
    TestUnconfirmedTxs: rpc_test.go:343: Error Trace: rpc_test.go:343
        Error: Not equal: expected: 1, actual: 0
    TestUnconfirmedTxs: rpc_test.go:345: Error Trace: rpc_test.go:345
        Error: Not equal:
        expected: types.Txs{types.Tx{0x39, 0x44, 0x4d, 0x6c, 0x4b, 0x66, 0x46, 0x78, 0x3d, 0x45, 0x33, 0x33, 0x68, 0x47, 0x6e, 0x79, 0x58}}
        actual  : types.Txs{}
        Diff:
        --- Expected
        +++ Actual
        @@ -1,3 +1,2 @@
        -(types.Txs) (len=1) {
        - (types.Tx) (len=17) Tx{39444D6C4B6646783D45333368476E7958}
        +(types.Txs) {
         }
--- FAIL: TestUnconfirmedTxs (0.20s)
=== RUN   TestNumUnconfirmedTxs
    TestNumUnconfirmedTxs: rpc_test.go:364: Error Trace: rpc_test.go:364
        Error: Not equal: expected: 1, actual: 0
    TestNumUnconfirmedTxs: rpc_test.go:365: Error Trace: rpc_test.go:365
        Error: Not equal: expected: 1, actual: 0
    TestNumUnconfirmedTxs: rpc_test.go:364: Error Trace: rpc_test.go:364
        Error: Not equal: expected: 1, actual: 0
    TestNumUnconfirmedTxs: rpc_test.go:365: Error Trace: rpc_test.go:365
        Error: Not equal: expected: 1, actual: 0
--- FAIL: TestNumUnconfirmedTxs (0.09s)
```
5 years ago
cleanup: Reduce and normalize import path aliasing. (#6975)

The code in the Tendermint repository makes heavy use of import aliasing. This is made necessary by our extensive reuse of common base package names, and by repetition of similar names across different subdirectories. Unfortunately we have not been very consistent about which packages we alias in various circumstances, and the aliases we use vary.

In the spirit of the advice in the style guide and https://github.com/golang/go/wiki/CodeReviewComments#imports, this change makes an effort to clean up and normalize import aliasing.

This change makes no API or behavioral changes. It is a pure cleanup intended to help make the code more readable to developers (including myself) trying to understand what is being imported where. Only unexported names have been modified, and the changes were generated and applied mechanically with gofmt -r and comby, respecting the lexical and syntactic rules of Go. Even so, I did not fix every inconsistency. Where the changes would have been too disruptive, I left them alone.

The principles I followed in this cleanup are:

- Remove aliases that restate the package name.
- Remove aliases where the base package name is unambiguous.
- Move overly-terse abbreviations from the import to the usage site.
- Fix lexical issues (remove underscores, remove capitalization).
- Fix import groupings to more closely match the style guide.
- Group blank (side-effecting) imports and ensure they are commented.
- Add aliases to multiple imports with the same base package name.
3 years ago
rpc/lib/client & server: try to conform to JSON-RPC 2.0 spec (#4141)

https://www.jsonrpc.org/specification

What is done in this PR:

* JSONRPCClient: validate that Response.ID matches Request.ID. I wanted to do the same for the WSClient, but since we're sending events as responses, not notifications, checking IDs would require storing them in memory indefinitely (and we wouldn't be able to remove them when a client unsubscribes, because the ID is different by then).
* Request.ID is now optional. A notification is a request without an ID. Previously "" or 0 were treated as notifications.
* Remove the #event suffix from the ID of an event response (partially fixes #2949). The ID must be a string, an int, or null, AND must equal the request's ID. Because we've implemented events as responses, WS clients trip when they see Response.ID("0#event") != Request.ID("0"). Implementing events as requests would require a lot of time (~2 days to completely rewrite the WS client and server).
* Generate a unique ID for each request, in conformance with the JSON-RPC spec, and switch to integer IDs instead of "json-client-XYZ": id=0 method=/subscribe, id=0 result=..., id=1 method=/abci_query, id=1 result=...

> Sending events (resulting from /subscribe) as requests+notifications (not responses) would require a lot of work; probably not worth it.

* WSClient: check for unsolicited responses; comment out sentIDs (see the commit body for the reason); clients are safe for concurrent access
* Remove the ID from /subscribe responses (refs #2949)
* tm-bench: switch to int IDs, and use t.Rate in the tm-bench indexer when calculating IDs
* Remove body.Close; it will be closed automatically
* Stop the WS connection outside of the write/read routines
* Fix golangci, gocritic, stylecheck, and golint linter warnings
* Update swagger.yaml, apply code-review suggestions, and update the changelog
5 years ago
* Add examples for simple and batch HTTP client usage * Check errors from writer and remove nolinter directives * Make error strings start with lowercase letter * Refactor examples to make them testable * Use safer deferred shutdown for example Tendermint test node * Recompose rpcClient interface from pre-existing interface components * Rename WaitGroup for brevity * Replace empty ID string with request ID * Remove extraneous test case * Convert first letter of errors.Wrap() messages to lowercase * Remove extraneous function parameter * Make variable declaration terse * Reorder WaitGroup.Done call to help prevent race conditions in the face of failure * Swap mutex to value representation and remove initialization * Restore empty JSONRPC string ID in response to prevent nil * Make JSONRPCBufferedRequest private * Revert PR hard link in CHANGELOG_PENDING * Add client ID for JSONRPCClient This adds code to automatically generate a randomized client ID for the JSONRPCClient, and adds a check of the IDs in the responses (if one was set in the requests). * Extract response ID validation into separate function * Remove extraneous comments * Reorder fields to indicate clearly which are protected by the mutex * Refactor for loop to remove indexing * Restructure and combine loop * Flatten conditional block for better readability * Make multi-variable declaration slightly more readable * Change for loop style * Compress error check statements * Make function description more generic to show that we support different protocols * Preallocate memory for request and result objects
6 years ago
rpc: add support for batched requests/responses (#3534) Continues from #3280 in building support for batched requests/responses in the JSON RPC (as per issue #3213). * Add JSON RPC batching for client and server As per #3213, this adds support for [JSON RPC batch requests and responses](https://www.jsonrpc.org/specification#batch). * Add additional checks to ensure client responses are the same as results * Fix case where a notification is sent and no response is expected * Add test to check that JSON RPC notifications in a batch are left out in responses * Update CHANGELOG_PENDING.md * Update PR number now that PR has been created * Make errors start with lowercase letter * Refactor batch functionality to be standalone This refactors the batching functionality to rather act in a standalone way. In light of supporting concurrent goroutines making use of the same client, it would make sense to have batching functionality where one could create a batch of requests per goroutine and send that batch without interfering with a batch from another goroutine. 
* Add examples for simple and batch HTTP client usage * Check errors from writer and remove nolinter directives * Make error strings start with lowercase letter * Refactor examples to make them testable * Use safer deferred shutdown for example Tendermint test node * Recompose rpcClient interface from pre-existing interface components * Rename WaitGroup for brevity * Replace empty ID string with request ID * Remove extraneous test case * Convert first letter of errors.Wrap() messages to lowercase * Remove extraneous function parameter * Make variable declaration terse * Reorder WaitGroup.Done call to help prevent race conditions in the face of failure * Swap mutex to value representation and remove initialization * Restore empty JSONRPC string ID in response to prevent nil * Make JSONRPCBufferedRequest private * Revert PR hard link in CHANGELOG_PENDING * Add client ID for JSONRPCClient This adds code to automatically generate a randomized client ID for the JSONRPCClient, and adds a check of the IDs in the responses (if one was set in the requests). * Extract response ID validation into separate function * Remove extraneous comments * Reorder fields to indicate clearly which are protected by the mutex * Refactor for loop to remove indexing * Restructure and combine loop * Flatten conditional block for better readability * Make multi-variable declaration slightly more readable * Change for loop style * Compress error check statements * Make function description more generic to show that we support different protocols * Preallocate memory for request and result objects
6 years ago
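As context for the batching change described in the commit message above: a JSON-RPC 2.0 batch is simply a JSON array of request objects sent in a single HTTP body, and the server replies with an array of responses (notifications, which carry no ID, get no response entry). The sketch below shows only the wire shape; the `request` type and `marshalBatch` helper are hypothetical illustrations, not the actual Tendermint `rpc/lib` types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// request is a minimal JSON-RPC 2.0 request envelope.
// Hypothetical type for illustration; the real client types live in rpc/lib.
type request struct {
	JSONRPC string          `json:"jsonrpc"`
	ID      int             `json:"id,omitempty"` // omitted for notifications
	Method  string          `json:"method"`
	Params  json.RawMessage `json:"params,omitempty"`
}

// marshalBatch encodes a batch of requests as a single JSON array,
// which is the entire body of a batched JSON-RPC call.
func marshalBatch(reqs []request) ([]byte, error) {
	return json.Marshal(reqs)
}

func main() {
	batch := []request{
		{JSONRPC: "2.0", ID: 1, Method: "abci_info"},
		{JSONRPC: "2.0", ID: 2, Method: "status"},
	}
	body, err := marshalBatch(batch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```

Each request in the array carries its own ID, which is what lets the client match the (possibly reordered) responses back to their requests, and what lets per-goroutine batches share one client without interfering.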
cleanup: Reduce and normalize import path aliasing. (#6975) The code in the Tendermint repository makes heavy use of import aliasing. This is made necessary by our extensive reuse of common base package names, and by repetition of similar names across different subdirectories. Unfortunately we have not been very consistent about which packages we alias in various circumstances, and the aliases we use vary. In the spirit of the advice in the style guide and https://github.com/golang/go/wiki/CodeReviewComments#imports, this change makes an effort to clean up and normalize import aliasing. This change makes no API or behavioral changes. It is a pure cleanup intended to help make the code more readable to developers (including myself) trying to understand what is being imported where. Only unexported names have been modified, and the changes were generated and applied mechanically with gofmt -r and comby, respecting the lexical and syntactic rules of Go. Even so, I did not fix every inconsistency. Where the changes would be too disruptive, I left it alone. The principles I followed in this cleanup are: - Remove aliases that restate the package name. - Remove aliases where the base package name is unambiguous. - Move overly-terse abbreviations from the import to the usage site. - Fix lexical issues (remove underscores, remove capitalization). - Fix import groupings to more closely match the style guide. - Group blank (side-effecting) imports and ensure they are commented. - Add aliases to multiple imports with the same base package name.
3 years ago
rpc: add support for batched requests/responses (#3534) Continues from #3280 in building support for batched requests/responses in the JSON RPC (as per issue #3213). * Add JSON RPC batching for client and server As per #3213, this adds support for [JSON RPC batch requests and responses](https://www.jsonrpc.org/specification#batch). * Add additional checks to ensure client responses are the same as results * Fix case where a notification is sent and no response is expected * Add test to check that JSON RPC notifications in a batch are left out in responses * Update CHANGELOG_PENDING.md * Update PR number now that PR has been created * Make errors start with lowercase letter * Refactor batch functionality to be standalone This refactors the batching functionality to rather act in a standalone way. In light of supporting concurrent goroutines making use of the same client, it would make sense to have batching functionality where one could create a batch of requests per goroutine and send that batch without interfering with a batch from another goroutine. 
* Add examples for simple and batch HTTP client usage * Check errors from writer and remove nolinter directives * Make error strings start with lowercase letter * Refactor examples to make them testable * Use safer deferred shutdown for example Tendermint test node * Recompose rpcClient interface from pre-existing interface components * Rename WaitGroup for brevity * Replace empty ID string with request ID * Remove extraneous test case * Convert first letter of errors.Wrap() messages to lowercase * Remove extraneous function parameter * Make variable declaration terse * Reorder WaitGroup.Done call to help prevent race conditions in the face of failure * Swap mutex to value representation and remove initialization * Restore empty JSONRPC string ID in response to prevent nil * Make JSONRPCBufferedRequest private * Revert PR hard link in CHANGELOG_PENDING * Add client ID for JSONRPCClient This adds code to automatically generate a randomized client ID for the JSONRPCClient, and adds a check of the IDs in the responses (if one was set in the requests). * Extract response ID validation into separate function * Remove extraneous comments * Reorder fields to indicate clearly which are protected by the mutex * Refactor for loop to remove indexing * Restructure and combine loop * Flatten conditional block for better readability * Make multi-variable declaration slightly more readable * Change for loop style * Compress error check statements * Make function description more generic to show that we support different protocols * Preallocate memory for request and result objects
6 years ago
cleanup: Reduce and normalize import path aliasing. (#6975) The code in the Tendermint repository makes heavy use of import aliasing. This is made necessary by our extensive reuse of common base package names, and by repetition of similar names across different subdirectories. Unfortunately we have not been very consistent about which packages we alias in various circumstances, and the aliases we use vary. In the spirit of the advice in the style guide and https://github.com/golang/go/wiki/CodeReviewComments#imports, his change makes an effort to clean up and normalize import aliasing. This change makes no API or behavioral changes. It is a pure cleanup intended o help make the code more readable to developers (including myself) trying to understand what is being imported where. Only unexported names have been modified, and the changes were generated and applied mechanically with gofmt -r and comby, respecting the lexical and syntactic rules of Go. Even so, I did not fix every inconsistency. Where the changes would be too disruptive, I left it alone. The principles I followed in this cleanup are: - Remove aliases that restate the package name. - Remove aliases where the base package name is unambiguous. - Move overly-terse abbreviations from the import to the usage site. - Fix lexical issues (remove underscores, remove capitalization). - Fix import groupings to more closely match the style guide. - Group blank (side-effecting) imports and ensure they are commented. - Add aliases to multiple imports with the same base package name.
3 years ago
rpc: add support for batched requests/responses (#3534) Continues from #3280 in building support for batched requests/responses in the JSON RPC (as per issue #3213). * Add JSON RPC batching for client and server As per #3213, this adds support for [JSON RPC batch requests and responses](https://www.jsonrpc.org/specification#batch). * Add additional checks to ensure client responses are the same as results * Fix case where a notification is sent and no response is expected * Add test to check that JSON RPC notifications in a batch are left out in responses * Update CHANGELOG_PENDING.md * Update PR number now that PR has been created * Make errors start with lowercase letter * Refactor batch functionality to be standalone This refactors the batching functionality to rather act in a standalone way. In light of supporting concurrent goroutines making use of the same client, it would make sense to have batching functionality where one could create a batch of requests per goroutine and send that batch without interfering with a batch from another goroutine. 
* Add examples for simple and batch HTTP client usage * Check errors from writer and remove nolinter directives * Make error strings start with lowercase letter * Refactor examples to make them testable * Use safer deferred shutdown for example Tendermint test node * Recompose rpcClient interface from pre-existing interface components * Rename WaitGroup for brevity * Replace empty ID string with request ID * Remove extraneous test case * Convert first letter of errors.Wrap() messages to lowercase * Remove extraneous function parameter * Make variable declaration terse * Reorder WaitGroup.Done call to help prevent race conditions in the face of failure * Swap mutex to value representation and remove initialization * Restore empty JSONRPC string ID in response to prevent nil * Make JSONRPCBufferedRequest private * Revert PR hard link in CHANGELOG_PENDING * Add client ID for JSONRPCClient This adds code to automatically generate a randomized client ID for the JSONRPCClient, and adds a check of the IDs in the responses (if one was set in the requests). * Extract response ID validation into separate function * Remove extraneous comments * Reorder fields to indicate clearly which are protected by the mutex * Refactor for loop to remove indexing * Restructure and combine loop * Flatten conditional block for better readability * Make multi-variable declaration slightly more readable * Change for loop style * Compress error check statements * Make function description more generic to show that we support different protocols * Preallocate memory for request and result objects
6 years ago
rpc: add support for batched requests/responses (#3534) Continues from #3280 in building support for batched requests/responses in the JSON RPC (as per issue #3213). * Add JSON RPC batching for client and server As per #3213, this adds support for [JSON RPC batch requests and responses](https://www.jsonrpc.org/specification#batch). * Add additional checks to ensure client responses are the same as results * Fix case where a notification is sent and no response is expected * Add test to check that JSON RPC notifications in a batch are left out in responses * Update CHANGELOG_PENDING.md * Update PR number now that PR has been created * Make errors start with lowercase letter * Refactor batch functionality to be standalone This refactors the batching functionality to rather act in a standalone way. In light of supporting concurrent goroutines making use of the same client, it would make sense to have batching functionality where one could create a batch of requests per goroutine and send that batch without interfering with a batch from another goroutine. 
* Add examples for simple and batch HTTP client usage * Check errors from writer and remove nolinter directives * Make error strings start with lowercase letter * Refactor examples to make them testable * Use safer deferred shutdown for example Tendermint test node * Recompose rpcClient interface from pre-existing interface components * Rename WaitGroup for brevity * Replace empty ID string with request ID * Remove extraneous test case * Convert first letter of errors.Wrap() messages to lowercase * Remove extraneous function parameter * Make variable declaration terse * Reorder WaitGroup.Done call to help prevent race conditions in the face of failure * Swap mutex to value representation and remove initialization * Restore empty JSONRPC string ID in response to prevent nil * Make JSONRPCBufferedRequest private * Revert PR hard link in CHANGELOG_PENDING * Add client ID for JSONRPCClient This adds code to automatically generate a randomized client ID for the JSONRPCClient, and adds a check of the IDs in the responses (if one was set in the requests). * Extract response ID validation into separate function * Remove extraneous comments * Reorder fields to indicate clearly which are protected by the mutex * Refactor for loop to remove indexing * Restructure and combine loop * Flatten conditional block for better readability * Make multi-variable declaration slightly more readable * Change for loop style * Compress error check statements * Make function description more generic to show that we support different protocols * Preallocate memory for request and result objects
6 years ago
rpc: add support for batched requests/responses (#3534) Continues from #3280 in building support for batched requests/responses in the JSON RPC (as per issue #3213). * Add JSON RPC batching for client and server As per #3213, this adds support for [JSON RPC batch requests and responses](https://www.jsonrpc.org/specification#batch). * Add additional checks to ensure client responses are the same as results * Fix case where a notification is sent and no response is expected * Add test to check that JSON RPC notifications in a batch are left out in responses * Update CHANGELOG_PENDING.md * Update PR number now that PR has been created * Make errors start with lowercase letter * Refactor batch functionality to be standalone This refactors the batching functionality to rather act in a standalone way. In light of supporting concurrent goroutines making use of the same client, it would make sense to have batching functionality where one could create a batch of requests per goroutine and send that batch without interfering with a batch from another goroutine. 
* Add examples for simple and batch HTTP client usage * Check errors from writer and remove nolinter directives * Make error strings start with lowercase letter * Refactor examples to make them testable * Use safer deferred shutdown for example Tendermint test node * Recompose rpcClient interface from pre-existing interface components * Rename WaitGroup for brevity * Replace empty ID string with request ID * Remove extraneous test case * Convert first letter of errors.Wrap() messages to lowercase * Remove extraneous function parameter * Make variable declaration terse * Reorder WaitGroup.Done call to help prevent race conditions in the face of failure * Swap mutex to value representation and remove initialization * Restore empty JSONRPC string ID in response to prevent nil * Make JSONRPCBufferedRequest private * Revert PR hard link in CHANGELOG_PENDING * Add client ID for JSONRPCClient This adds code to automatically generate a randomized client ID for the JSONRPCClient, and adds a check of the IDs in the responses (if one was set in the requests). * Extract response ID validation into separate function * Remove extraneous comments * Reorder fields to indicate clearly which are protected by the mutex * Refactor for loop to remove indexing * Restructure and combine loop * Flatten conditional block for better readability * Make multi-variable declaration slightly more readable * Change for loop style * Compress error check statements * Make function description more generic to show that we support different protocols * Preallocate memory for request and result objects
6 years ago
rpc: add support for batched requests/responses (#3534) Continues from #3280 in building support for batched requests/responses in the JSON RPC (as per issue #3213). * Add JSON RPC batching for client and server As per #3213, this adds support for [JSON RPC batch requests and responses](https://www.jsonrpc.org/specification#batch). * Add additional checks to ensure client responses are the same as results * Fix case where a notification is sent and no response is expected * Add test to check that JSON RPC notifications in a batch are left out in responses * Update CHANGELOG_PENDING.md * Update PR number now that PR has been created * Make errors start with lowercase letter * Refactor batch functionality to be standalone This refactors the batching functionality to rather act in a standalone way. In light of supporting concurrent goroutines making use of the same client, it would make sense to have batching functionality where one could create a batch of requests per goroutine and send that batch without interfering with a batch from another goroutine. 
* Add examples for simple and batch HTTP client usage * Check errors from writer and remove nolinter directives * Make error strings start with lowercase letter * Refactor examples to make them testable * Use safer deferred shutdown for example Tendermint test node * Recompose rpcClient interface from pre-existing interface components * Rename WaitGroup for brevity * Replace empty ID string with request ID * Remove extraneous test case * Convert first letter of errors.Wrap() messages to lowercase * Remove extraneous function parameter * Make variable declaration terse * Reorder WaitGroup.Done call to help prevent race conditions in the face of failure * Swap mutex to value representation and remove initialization * Restore empty JSONRPC string ID in response to prevent nil * Make JSONRPCBufferedRequest private * Revert PR hard link in CHANGELOG_PENDING * Add client ID for JSONRPCClient This adds code to automatically generate a randomized client ID for the JSONRPCClient, and adds a check of the IDs in the responses (if one was set in the requests). * Extract response ID validation into separate function * Remove extraneous comments * Reorder fields to indicate clearly which are protected by the mutex * Refactor for loop to remove indexing * Restructure and combine loop * Flatten conditional block for better readability * Make multi-variable declaration slightly more readable * Change for loop style * Compress error check statements * Make function description more generic to show that we support different protocols * Preallocate memory for request and result objects
6 years ago
rpc: add support for batched requests/responses (#3534) Continues from #3280 in building support for batched requests/responses in the JSON RPC (as per issue #3213). * Add JSON RPC batching for client and server As per #3213, this adds support for [JSON RPC batch requests and responses](https://www.jsonrpc.org/specification#batch). * Add additional checks to ensure client responses are the same as results * Fix case where a notification is sent and no response is expected * Add test to check that JSON RPC notifications in a batch are left out in responses * Update CHANGELOG_PENDING.md * Update PR number now that PR has been created * Make errors start with lowercase letter * Refactor batch functionality to be standalone This refactors the batching functionality to rather act in a standalone way. In light of supporting concurrent goroutines making use of the same client, it would make sense to have batching functionality where one could create a batch of requests per goroutine and send that batch without interfering with a batch from another goroutine. 
* Add examples for simple and batch HTTP client usage * Check errors from writer and remove nolinter directives * Make error strings start with lowercase letter * Refactor examples to make them testable * Use safer deferred shutdown for example Tendermint test node * Recompose rpcClient interface from pre-existing interface components * Rename WaitGroup for brevity * Replace empty ID string with request ID * Remove extraneous test case * Convert first letter of errors.Wrap() messages to lowercase * Remove extraneous function parameter * Make variable declaration terse * Reorder WaitGroup.Done call to help prevent race conditions in the face of failure * Swap mutex to value representation and remove initialization * Restore empty JSONRPC string ID in response to prevent nil * Make JSONRPCBufferedRequest private * Revert PR hard link in CHANGELOG_PENDING * Add client ID for JSONRPCClient This adds code to automatically generate a randomized client ID for the JSONRPCClient, and adds a check of the IDs in the responses (if one was set in the requests). * Extract response ID validation into separate function * Remove extraneous comments * Reorder fields to indicate clearly which are protected by the mutex * Refactor for loop to remove indexing * Restructure and combine loop * Flatten conditional block for better readability * Make multi-variable declaration slightly more readable * Change for loop style * Compress error check statements * Make function description more generic to show that we support different protocols * Preallocate memory for request and result objects
6 years ago
rpc: add support for batched requests/responses (#3534) Continues from #3280 in building support for batched requests/responses in the JSON RPC (as per issue #3213). * Add JSON RPC batching for client and server As per #3213, this adds support for [JSON RPC batch requests and responses](https://www.jsonrpc.org/specification#batch). * Add additional checks to ensure client responses are the same as results * Fix case where a notification is sent and no response is expected * Add test to check that JSON RPC notifications in a batch are left out in responses * Update CHANGELOG_PENDING.md * Update PR number now that PR has been created * Make errors start with lowercase letter * Refactor batch functionality to be standalone This refactors the batching functionality to rather act in a standalone way. In light of supporting concurrent goroutines making use of the same client, it would make sense to have batching functionality where one could create a batch of requests per goroutine and send that batch without interfering with a batch from another goroutine. 
* Add examples for simple and batch HTTP client usage * Check errors from writer and remove nolinter directives * Make error strings start with lowercase letter * Refactor examples to make them testable * Use safer deferred shutdown for example Tendermint test node * Recompose rpcClient interface from pre-existing interface components * Rename WaitGroup for brevity * Replace empty ID string with request ID * Remove extraneous test case * Convert first letter of errors.Wrap() messages to lowercase * Remove extraneous function parameter * Make variable declaration terse * Reorder WaitGroup.Done call to help prevent race conditions in the face of failure * Swap mutex to value representation and remove initialization * Restore empty JSONRPC string ID in response to prevent nil * Make JSONRPCBufferedRequest private * Revert PR hard link in CHANGELOG_PENDING * Add client ID for JSONRPCClient This adds code to automatically generate a randomized client ID for the JSONRPCClient, and adds a check of the IDs in the responses (if one was set in the requests). * Extract response ID validation into separate function * Remove extraneous comments * Reorder fields to indicate clearly which are protected by the mutex * Refactor for loop to remove indexing * Restructure and combine loop * Flatten conditional block for better readability * Make multi-variable declaration slightly more readable * Change for loop style * Compress error check statements * Make function description more generic to show that we support different protocols * Preallocate memory for request and result objects
6 years ago
cleanup: Reduce and normalize import path aliasing. (#6975) The code in the Tendermint repository makes heavy use of import aliasing. This is made necessary by our extensive reuse of common base package names, and by repetition of similar names across different subdirectories. Unfortunately we have not been very consistent about which packages we alias in various circumstances, and the aliases we use vary. In the spirit of the advice in the style guide and https://github.com/golang/go/wiki/CodeReviewComments#imports, his change makes an effort to clean up and normalize import aliasing. This change makes no API or behavioral changes. It is a pure cleanup intended o help make the code more readable to developers (including myself) trying to understand what is being imported where. Only unexported names have been modified, and the changes were generated and applied mechanically with gofmt -r and comby, respecting the lexical and syntactic rules of Go. Even so, I did not fix every inconsistency. Where the changes would be too disruptive, I left it alone. The principles I followed in this cleanup are: - Remove aliases that restate the package name. - Remove aliases where the base package name is unambiguous. - Move overly-terse abbreviations from the import to the usage site. - Fix lexical issues (remove underscores, remove capitalization). - Fix import groupings to more closely match the style guide. - Group blank (side-effecting) imports and ensure they are commented. - Add aliases to multiple imports with the same base package name.
3 years ago
rpc: add support for batched requests/responses (#3534) Continues from #3280 in building support for batched requests/responses in the JSON RPC (as per issue #3213). * Add JSON RPC batching for client and server As per #3213, this adds support for [JSON RPC batch requests and responses](https://www.jsonrpc.org/specification#batch). * Add additional checks to ensure client responses are the same as results * Fix case where a notification is sent and no response is expected * Add test to check that JSON RPC notifications in a batch are left out in responses * Update CHANGELOG_PENDING.md * Update PR number now that PR has been created * Make errors start with lowercase letter * Refactor batch functionality to be standalone This refactors the batching functionality to rather act in a standalone way. In light of supporting concurrent goroutines making use of the same client, it would make sense to have batching functionality where one could create a batch of requests per goroutine and send that batch without interfering with a batch from another goroutine. 
* Add examples for simple and batch HTTP client usage * Check errors from writer and remove nolinter directives * Make error strings start with lowercase letter * Refactor examples to make them testable * Use safer deferred shutdown for example Tendermint test node * Recompose rpcClient interface from pre-existing interface components * Rename WaitGroup for brevity * Replace empty ID string with request ID * Remove extraneous test case * Convert first letter of errors.Wrap() messages to lowercase * Remove extraneous function parameter * Make variable declaration terse * Reorder WaitGroup.Done call to help prevent race conditions in the face of failure * Swap mutex to value representation and remove initialization * Restore empty JSONRPC string ID in response to prevent nil * Make JSONRPCBufferedRequest private * Revert PR hard link in CHANGELOG_PENDING * Add client ID for JSONRPCClient This adds code to automatically generate a randomized client ID for the JSONRPCClient, and adds a check of the IDs in the responses (if one was set in the requests). * Extract response ID validation into separate function * Remove extraneous comments * Reorder fields to indicate clearly which are protected by the mutex * Refactor for loop to remove indexing * Restructure and combine loop * Flatten conditional block for better readability * Make multi-variable declaration slightly more readable * Change for loop style * Compress error check statements * Make function description more generic to show that we support different protocols * Preallocate memory for request and result objects
6 years ago
cleanup: Reduce and normalize import path aliasing. (#6975) The code in the Tendermint repository makes heavy use of import aliasing. This is made necessary by our extensive reuse of common base package names, and by repetition of similar names across different subdirectories. Unfortunately we have not been very consistent about which packages we alias in various circumstances, and the aliases we use vary. In the spirit of the advice in the style guide and https://github.com/golang/go/wiki/CodeReviewComments#imports, this change makes an effort to clean up and normalize import aliasing. This change makes no API or behavioral changes. It is a pure cleanup intended to help make the code more readable to developers (including myself) trying to understand what is being imported where. Only unexported names have been modified, and the changes were generated and applied mechanically with gofmt -r and comby, respecting the lexical and syntactic rules of Go. Even so, I did not fix every inconsistency. Where the changes would be too disruptive, I left it alone. The principles I followed in this cleanup are: - Remove aliases that restate the package name. - Remove aliases where the base package name is unambiguous. - Move overly-terse abbreviations from the import to the usage site. - Fix lexical issues (remove underscores, remove capitalization). - Fix import groupings to more closely match the style guide. - Group blank (side-effecting) imports and ensure they are commented. - Add aliases to multiple imports with the same base package name.
3 years ago
package client_test

import (
	"bytes"
	"context"
	"encoding/base64"
	"fmt"
	"math"
	"net/http"
	"strings"
	"sync"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	abci "github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/crypto/ed25519"
	"github.com/tendermint/tendermint/crypto/encoding"
	"github.com/tendermint/tendermint/internal/mempool"
	tmjson "github.com/tendermint/tendermint/libs/json"
	"github.com/tendermint/tendermint/libs/log"
	tmmath "github.com/tendermint/tendermint/libs/math"
	"github.com/tendermint/tendermint/libs/service"
	"github.com/tendermint/tendermint/privval"
	"github.com/tendermint/tendermint/rpc/client"
	rpchttp "github.com/tendermint/tendermint/rpc/client/http"
	rpclocal "github.com/tendermint/tendermint/rpc/client/local"
	"github.com/tendermint/tendermint/rpc/coretypes"
	rpcclient "github.com/tendermint/tendermint/rpc/jsonrpc/client"
	"github.com/tendermint/tendermint/types"
)
func getHTTPClient(t *testing.T, conf *config.Config) *rpchttp.HTTP {
	t.Helper()

	rpcAddr := conf.RPC.ListenAddress
	c, err := rpchttp.NewWithClient(rpcAddr, http.DefaultClient)
	require.NoError(t, err)

	c.Logger = log.NewTestingLogger(t)
	t.Cleanup(func() {
		if c.IsRunning() {
			require.NoError(t, c.Stop())
		}
	})

	return c
}

func getHTTPClientWithTimeout(t *testing.T, conf *config.Config, timeout time.Duration) *rpchttp.HTTP {
	t.Helper()

	rpcAddr := conf.RPC.ListenAddress
	http.DefaultClient.Timeout = timeout
	c, err := rpchttp.NewWithClient(rpcAddr, http.DefaultClient)
	require.NoError(t, err)

	c.Logger = log.NewTestingLogger(t)
	t.Cleanup(func() {
		http.DefaultClient.Timeout = 0
		if c.IsRunning() {
			require.NoError(t, c.Stop())
		}
	})

	return c
}

// GetClients returns a slice of clients for table-driven tests
func GetClients(t *testing.T, ns service.Service, conf *config.Config) []client.Client {
	t.Helper()

	node, ok := ns.(rpclocal.NodeService)
	require.True(t, ok)

	ncl, err := rpclocal.New(node)
	require.NoError(t, err)

	return []client.Client{
		ncl,
		getHTTPClient(t, conf),
	}
}
func TestClientOperations(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	_, conf := NodeSuite(t)

	t.Run("NilCustomHTTPClient", func(t *testing.T) {
		_, err := rpchttp.NewWithClient("http://example.com", nil)
		require.Error(t, err)

		_, err = rpcclient.NewWithHTTPClient("http://example.com", nil)
		require.Error(t, err)
	})
	t.Run("ParseInvalidAddress", func(t *testing.T) {
		// should remove trailing /
		invalidRemote := conf.RPC.ListenAddress + "/"
		_, err := rpchttp.New(invalidRemote)
		require.NoError(t, err)
	})
	t.Run("CustomHTTPClient", func(t *testing.T) {
		remote := conf.RPC.ListenAddress
		c, err := rpchttp.NewWithClient(remote, http.DefaultClient)
		require.NoError(t, err)

		status, err := c.Status(ctx)
		require.NoError(t, err)
		require.NotNil(t, status)
	})
	t.Run("CorsEnabled", func(t *testing.T) {
		origin := conf.RPC.CORSAllowedOrigins[0]
		remote := strings.ReplaceAll(conf.RPC.ListenAddress, "tcp", "http")

		req, err := http.NewRequestWithContext(ctx, "GET", remote, nil)
		require.NoError(t, err, "%+v", err)
		req.Header.Set("Origin", origin)

		resp, err := http.DefaultClient.Do(req)
		require.NoError(t, err, "%+v", err)
		defer resp.Body.Close()

		assert.Equal(t, resp.Header.Get("Access-Control-Allow-Origin"), origin)
	})
	t.Run("Batching", func(t *testing.T) {
		t.Run("JSONRPCCalls", func(t *testing.T) {
			c := getHTTPClient(t, conf)
			testBatchedJSONRPCCalls(ctx, t, c)
		})
		t.Run("JSONRPCCallsCancellation", func(t *testing.T) {
			_, _, tx1 := MakeTxKV()
			_, _, tx2 := MakeTxKV()

			c := getHTTPClient(t, conf)
			batch := c.NewBatch()
			_, err := batch.BroadcastTxCommit(ctx, tx1)
			require.NoError(t, err)
			_, err = batch.BroadcastTxCommit(ctx, tx2)
			require.NoError(t, err)
			// we should have 2 requests waiting
			require.Equal(t, 2, batch.Count())
			// we want to make sure we cleared 2 pending requests
			require.Equal(t, 2, batch.Clear())
			// now there should be no batched requests
			require.Equal(t, 0, batch.Count())
		})
		t.Run("SendingEmptyRequest", func(t *testing.T) {
			c := getHTTPClient(t, conf)
			batch := c.NewBatch()
			_, err := batch.Send(ctx)
			require.Error(t, err, "sending an empty batch of JSON RPC requests should result in an error")
		})
		t.Run("ClearingEmptyRequest", func(t *testing.T) {
			c := getHTTPClient(t, conf)
			batch := c.NewBatch()
			require.Zero(t, batch.Clear(), "clearing an empty batch of JSON RPC requests should result in a 0 result")
		})
		t.Run("ConcurrentJSONRPC", func(t *testing.T) {
			var wg sync.WaitGroup
			c := getHTTPClient(t, conf)
			for i := 0; i < 50; i++ {
				wg.Add(1)
				go func() {
					defer wg.Done()
					testBatchedJSONRPCCalls(ctx, t, c)
				}()
			}
			wg.Wait()
		})
	})
	t.Run("HTTPReturnsErrorIfClientIsNotRunning", func(t *testing.T) {
		c := getHTTPClientWithTimeout(t, conf, 100*time.Millisecond)

		// on Subscribe
		_, err := c.Subscribe(ctx, "TestHeaderEvents",
			types.QueryForEvent(types.EventNewBlockHeaderValue).String())
		assert.Error(t, err)

		// on Unsubscribe
		err = c.Unsubscribe(ctx, "TestHeaderEvents",
			types.QueryForEvent(types.EventNewBlockHeaderValue).String())
		assert.Error(t, err)

		// on UnsubscribeAll
		err = c.UnsubscribeAll(ctx, "TestHeaderEvents")
		assert.Error(t, err)
	})
}
// Make sure info is correct (we connect properly)
func TestClientMethodCalls(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	n, conf := NodeSuite(t)

	// for broadcast tx tests
	pool := getMempool(t, n)

	// for evidence tests
	pv, err := privval.LoadOrGenFilePV(conf.PrivValidator.KeyFile(), conf.PrivValidator.StateFile())
	require.NoError(t, err)

	for i, c := range GetClients(t, n, conf) {
		t.Run(fmt.Sprintf("%T", c), func(t *testing.T) {
			t.Run("Status", func(t *testing.T) {
				status, err := c.Status(ctx)
				require.NoError(t, err, "%d: %+v", i, err)
				assert.Equal(t, conf.Moniker, status.NodeInfo.Moniker)
			})
			t.Run("Info", func(t *testing.T) {
				info, err := c.ABCIInfo(ctx)
				require.NoError(t, err)

				status, err := c.Status(ctx)
				require.NoError(t, err)

				assert.GreaterOrEqual(t, status.SyncInfo.LatestBlockHeight, info.Response.LastBlockHeight)
				assert.True(t, strings.Contains(info.Response.Data, "size"))
			})
			t.Run("NetInfo", func(t *testing.T) {
				nc, ok := c.(client.NetworkClient)
				require.True(t, ok, "%d", i)
				netinfo, err := nc.NetInfo(ctx)
				require.NoError(t, err, "%d: %+v", i, err)
				assert.True(t, netinfo.Listening)
				assert.Equal(t, 0, len(netinfo.Peers))
			})
			t.Run("DumpConsensusState", func(t *testing.T) {
				// FIXME: fix server so it doesn't panic on invalid input
				nc, ok := c.(client.NetworkClient)
				require.True(t, ok, "%d", i)
				cons, err := nc.DumpConsensusState(ctx)
				require.NoError(t, err, "%d: %+v", i, err)
				assert.NotEmpty(t, cons.RoundState)
				assert.Empty(t, cons.Peers)
			})
			t.Run("ConsensusState", func(t *testing.T) {
				// FIXME: fix server so it doesn't panic on invalid input
				nc, ok := c.(client.NetworkClient)
				require.True(t, ok, "%d", i)
				cons, err := nc.ConsensusState(ctx)
				require.NoError(t, err, "%d: %+v", i, err)
				assert.NotEmpty(t, cons.RoundState)
			})
			t.Run("Health", func(t *testing.T) {
				nc, ok := c.(client.NetworkClient)
				require.True(t, ok, "%d", i)
				_, err := nc.Health(ctx)
				require.NoError(t, err, "%d: %+v", i, err)
			})
			t.Run("GenesisAndValidators", func(t *testing.T) {
				// make sure this is the right genesis file
				gen, err := c.Genesis(ctx)
				require.NoError(t, err, "%d: %+v", i, err)

				// get the genesis validator
				require.Equal(t, 1, len(gen.Genesis.Validators))
				gval := gen.Genesis.Validators[0]

				// get the current validators
				h := int64(1)
				vals, err := c.Validators(ctx, &h, nil, nil)
				require.NoError(t, err, "%d: %+v", i, err)
				require.Equal(t, 1, len(vals.Validators))
				require.Equal(t, 1, vals.Count)
				require.Equal(t, 1, vals.Total)
				val := vals.Validators[0]

				// make sure the current set is also the genesis set
				assert.Equal(t, gval.Power, val.VotingPower)
				assert.Equal(t, gval.PubKey, val.PubKey)
			})
			t.Run("GenesisChunked", func(t *testing.T) {
				first, err := c.GenesisChunked(ctx, 0)
				require.NoError(t, err)

				decoded := make([]string, 0, first.TotalChunks)
				for i := 0; i < first.TotalChunks; i++ {
					chunk, err := c.GenesisChunked(ctx, uint(i))
					require.NoError(t, err)
					data, err := base64.StdEncoding.DecodeString(chunk.Data)
					require.NoError(t, err)
					decoded = append(decoded, string(data))
				}
				doc := []byte(strings.Join(decoded, ""))

				var out types.GenesisDoc
				require.NoError(t, tmjson.Unmarshal(doc, &out),
					"first: %+v, doc: %s", first, string(doc))
			})
			t.Run("ABCIQuery", func(t *testing.T) {
				// write something
				k, v, tx := MakeTxKV()
				status, err := c.Status(ctx)
				require.NoError(t, err)
				_, err = c.BroadcastTxSync(ctx, tx)
				require.NoError(t, err, "%d: %+v", i, err)
				apph := status.SyncInfo.LatestBlockHeight + 2 // this is where the tx will be applied to the state

				// wait before querying
				err = client.WaitForHeight(ctx, c, apph, nil)
				require.NoError(t, err)
				res, err := c.ABCIQuery(ctx, "/key", k)
				qres := res.Response
				if assert.NoError(t, err) && assert.True(t, qres.IsOK()) {
					assert.EqualValues(t, v, qres.Value)
				}
			})
			t.Run("AppCalls", func(t *testing.T) {
				// get an offset of height to avoid racing and guessing
				s, err := c.Status(ctx)
				require.NoError(t, err)
				// sh is start height or status height
				sh := s.SyncInfo.LatestBlockHeight

				// look for the future
				h := sh + 20
				_, err = c.Block(ctx, &h)
				require.Error(t, err) // no block yet

				// write something
				k, v, tx := MakeTxKV()
				bres, err := c.BroadcastTxCommit(ctx, tx)
				require.NoError(t, err)
				require.True(t, bres.DeliverTx.IsOK())
				txh := bres.Height
				apph := txh + 1 // this is where the tx will be applied to the state

				// wait before querying
				err = client.WaitForHeight(ctx, c, apph, nil)
				require.NoError(t, err)

				_qres, err := c.ABCIQueryWithOptions(ctx, "/key", k, client.ABCIQueryOptions{Prove: false})
				require.NoError(t, err)
				qres := _qres.Response
				if assert.True(t, qres.IsOK()) {
					assert.Equal(t, k, qres.Key)
					assert.EqualValues(t, v, qres.Value)
				}

				// make sure we can lookup the tx with proof
				ptx, err := c.Tx(ctx, bres.Hash, true)
				require.NoError(t, err)
				assert.EqualValues(t, txh, ptx.Height)
				assert.EqualValues(t, tx, ptx.Tx)

				// and we can even check the block is added
				block, err := c.Block(ctx, &apph)
				require.NoError(t, err)
				appHash := block.Block.Header.AppHash
				assert.True(t, len(appHash) > 0)
				assert.EqualValues(t, apph, block.Block.Header.Height)

				blockByHash, err := c.BlockByHash(ctx, block.BlockID.Hash)
				require.NoError(t, err)
				require.Equal(t, block, blockByHash)

				// check that the header matches the block hash
				header, err := c.Header(ctx, &apph)
				require.NoError(t, err)
				require.Equal(t, block.Block.Header, *header.Header)

				headerByHash, err := c.HeaderByHash(ctx, block.BlockID.Hash)
				require.NoError(t, err)
				require.Equal(t, header, headerByHash)

				// now check the results
				blockResults, err := c.BlockResults(ctx, &txh)
				require.NoError(t, err, "%d: %+v", i, err)
				assert.Equal(t, txh, blockResults.Height)
				if assert.Equal(t, 1, len(blockResults.TxsResults)) {
					// check success code
					assert.EqualValues(t, 0, blockResults.TxsResults[0].Code)
				}

				// check blockchain info, now that we know there is info
				info, err := c.BlockchainInfo(ctx, apph, apph)
				require.NoError(t, err)
				assert.True(t, info.LastHeight >= apph)
				if assert.Equal(t, 1, len(info.BlockMetas)) {
					lastMeta := info.BlockMetas[0]
					assert.EqualValues(t, apph, lastMeta.Header.Height)
					blockData := block.Block
					assert.Equal(t, blockData.Header.AppHash, lastMeta.Header.AppHash)
					assert.Equal(t, block.BlockID, lastMeta.BlockID)
				}

				// and get the corresponding commit with the same apphash
				commit, err := c.Commit(ctx, &apph)
				require.NoError(t, err)
				cappHash := commit.Header.AppHash
				assert.Equal(t, appHash, cappHash)
				assert.NotNil(t, commit.Commit)

				// compare the commits (note Commit(2) has commit from Block(3))
				h = apph - 1
				commit2, err := c.Commit(ctx, &h)
				require.NoError(t, err)
				assert.Equal(t, block.Block.LastCommitHash, commit2.Commit.Hash())

				// and we got a proof that works!
				_pres, err := c.ABCIQueryWithOptions(ctx, "/key", k, client.ABCIQueryOptions{Prove: true})
				require.NoError(t, err)
				pres := _pres.Response
				assert.True(t, pres.IsOK())

				// XXX Test proof
			})
			t.Run("BlockchainInfo", func(t *testing.T) {
				ctx, cancel := context.WithCancel(context.Background())
				defer cancel()

				err := client.WaitForHeight(ctx, c, 10, nil)
				require.NoError(t, err)

				res, err := c.BlockchainInfo(ctx, 0, 0)
				require.NoError(t, err, "%d: %+v", i, err)
				assert.True(t, res.LastHeight > 0)
				assert.True(t, len(res.BlockMetas) > 0)

				res, err = c.BlockchainInfo(ctx, 1, 1)
				require.NoError(t, err, "%d: %+v", i, err)
				assert.True(t, res.LastHeight > 0)
				assert.True(t, len(res.BlockMetas) == 1)

				res, err = c.BlockchainInfo(ctx, 1, 10000)
				require.NoError(t, err, "%d: %+v", i, err)
				assert.True(t, res.LastHeight > 0)
				assert.True(t, len(res.BlockMetas) < 100)
				for _, m := range res.BlockMetas {
					assert.NotNil(t, m)
				}

				res, err = c.BlockchainInfo(ctx, 10000, 1)
				require.Error(t, err)
				assert.Nil(t, res)
				assert.Contains(t, err.Error(), "can't be greater than max")
			})
			t.Run("BroadcastTxCommit", func(t *testing.T) {
				_, _, tx := MakeTxKV()
				bres, err := c.BroadcastTxCommit(ctx, tx)
				require.NoError(t, err, "%d: %+v", i, err)
				require.True(t, bres.CheckTx.IsOK())
				require.True(t, bres.DeliverTx.IsOK())

				require.Equal(t, 0, pool.Size())
			})
			t.Run("BroadcastTxSync", func(t *testing.T) {
				_, _, tx := MakeTxKV()
				initMempoolSize := pool.Size()
				bres, err := c.BroadcastTxSync(ctx, tx)
				require.NoError(t, err, "%d: %+v", i, err)
				require.Equal(t, bres.Code, abci.CodeTypeOK) // FIXME

				require.Equal(t, initMempoolSize+1, pool.Size())

				txs := pool.ReapMaxTxs(len(tx))
				require.EqualValues(t, tx, txs[0])
				pool.Flush()
			})
			t.Run("CheckTx", func(t *testing.T) {
				_, _, tx := MakeTxKV()

				res, err := c.CheckTx(ctx, tx)
				require.NoError(t, err)
				assert.Equal(t, abci.CodeTypeOK, res.Code)

				assert.Equal(t, 0, pool.Size(), "mempool must be empty")
			})
			t.Run("Events", func(t *testing.T) {
				// start the client for this test if it wasn't already running,
				// and stop it again via the deferred cancel
				if !c.IsRunning() {
					ctx, cancel := context.WithCancel(ctx)
					defer cancel()
					err := c.Start(ctx)
					require.NoError(t, err)
				}
				t.Run("Header", func(t *testing.T) {
					evt, err := client.WaitForOneEvent(c, types.EventNewBlockHeaderValue, waitForEventTimeout)
					require.NoError(t, err, "%d: %+v", i, err)
					_, ok := evt.(types.EventDataNewBlockHeader)
					require.True(t, ok, "%d: %#v", i, evt)
					// TODO: more checks...
				})
				t.Run("Block", func(t *testing.T) {
					const subscriber = "TestBlockEvents"

					eventCh, err := c.Subscribe(ctx, subscriber, types.QueryForEvent(types.EventNewBlockValue).String())
					require.NoError(t, err)
					t.Cleanup(func() {
						if err := c.UnsubscribeAll(ctx, subscriber); err != nil {
							t.Error(err)
						}
					})

					var firstBlockHeight int64
					for i := int64(0); i < 3; i++ {
						event := <-eventCh
						blockEvent, ok := event.Data.(types.EventDataNewBlock)
						require.True(t, ok)

						block := blockEvent.Block

						if firstBlockHeight == 0 {
							firstBlockHeight = block.Header.Height
						}

						require.Equal(t, firstBlockHeight+i, block.Header.Height)
					}
				})
				t.Run("BroadcastTxAsync", func(t *testing.T) {
					testTxEventsSent(ctx, t, "async", c)
				})
				t.Run("BroadcastTxSync", func(t *testing.T) {
					testTxEventsSent(ctx, t, "sync", c)
				})
			})
			t.Run("Evidence", func(t *testing.T) {
				t.Run("BroadcastDuplicateVote", func(t *testing.T) {
					ctx, cancel := context.WithCancel(context.Background())
					defer cancel()

					chainID := conf.ChainID()

					correct, fakes := makeEvidences(t, pv, chainID)

					// make sure that the node has produced enough blocks
					waitForBlock(ctx, t, c, 2)

					result, err := c.BroadcastEvidence(ctx, correct)
					require.NoError(t, err, "BroadcastEvidence(%s) failed", correct)
					assert.Equal(t, correct.Hash(), result.Hash, "expected result hash to match evidence hash")

					status, err := c.Status(ctx)
					require.NoError(t, err)
					err = client.WaitForHeight(ctx, c, status.SyncInfo.LatestBlockHeight+2, nil)
					require.NoError(t, err)

					ed25519pub := pv.Key.PubKey.(ed25519.PubKey)
					rawpub := ed25519pub.Bytes()
					result2, err := c.ABCIQuery(ctx, "/val", rawpub)
					require.NoError(t, err)
					qres := result2.Response
					require.True(t, qres.IsOK())

					var v abci.ValidatorUpdate
					err = abci.ReadMessage(bytes.NewReader(qres.Value), &v)
					require.NoError(t, err, "Error reading query result, value %v", qres.Value)

					pk, err := encoding.PubKeyFromProto(v.PubKey)
					require.NoError(t, err)

					require.EqualValues(t, rawpub, pk, "Stored PubKey not equal with expected, value %v", string(qres.Value))
					require.Equal(t, int64(9), v.Power, "Stored Power not equal with expected, value %v", string(qres.Value))

					for _, fake := range fakes {
						_, err := c.BroadcastEvidence(ctx, fake)
						require.Error(t, err, "BroadcastEvidence(%s) succeeded, but the evidence was fake", fake)
					}
				})
				t.Run("BroadcastEmpty", func(t *testing.T) {
					_, err := c.BroadcastEvidence(ctx, nil)
					assert.Error(t, err)
				})
			})
		})
	}
}
func getMempool(t *testing.T, srv service.Service) mempool.Mempool {
	t.Helper()

	n, ok := srv.(interface {
		Mempool() mempool.Mempool
	})
	require.True(t, ok)
	return n.Mempool()
}

// These cases are roughly the same as TestClientMethodCalls, but they
// have to loop over their clients in the individual test cases, so a
// separate suite makes more sense, though it isn't strictly speaking
// desirable.
func TestClientMethodCallsAdvanced(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	n, conf := NodeSuite(t)
	pool := getMempool(t, n)

	t.Run("UnconfirmedTxs", func(t *testing.T) {
		_, _, tx := MakeTxKV()
		ch := make(chan struct{})

		err := pool.CheckTx(ctx, tx, func(_ *abci.Response) { close(ch) }, mempool.TxInfo{})
		require.NoError(t, err)

		// wait for tx to arrive in the mempool
		select {
		case <-ch:
		case <-time.After(5 * time.Second):
			t.Error("Timed out waiting for CheckTx callback")
		}

		for _, c := range GetClients(t, n, conf) {
			mc := c.(client.MempoolClient)
			limit := 1
			res, err := mc.UnconfirmedTxs(ctx, &limit)
			require.NoError(t, err)

			assert.Equal(t, 1, res.Count)
			assert.Equal(t, 1, res.Total)
			assert.Equal(t, pool.SizeBytes(), res.TotalBytes)
			assert.Exactly(t, types.Txs{tx}, types.Txs(res.Txs))
		}

		pool.Flush()
	})
	t.Run("NumUnconfirmedTxs", func(t *testing.T) {
		ch := make(chan struct{})
		pool := getMempool(t, n)

		_, _, tx := MakeTxKV()
		err := pool.CheckTx(ctx, tx, func(_ *abci.Response) { close(ch) }, mempool.TxInfo{})
		require.NoError(t, err)

		// wait for tx to arrive in the mempool
		select {
		case <-ch:
		case <-time.After(5 * time.Second):
			t.Error("Timed out waiting for CheckTx callback")
		}

		mempoolSize := pool.Size()
		for i, c := range GetClients(t, n, conf) {
			mc, ok := c.(client.MempoolClient)
			require.True(t, ok, "%d", i)
			res, err := mc.NumUnconfirmedTxs(ctx)
			require.NoError(t, err, "%d: %+v", i, err)

			assert.Equal(t, mempoolSize, res.Count)
			assert.Equal(t, mempoolSize, res.Total)
			assert.Equal(t, pool.SizeBytes(), res.TotalBytes)
		}

		pool.Flush()
	})
	t.Run("Tx", func(t *testing.T) {
		c := getHTTPClient(t, conf)

		// first we broadcast a tx
		_, _, tx := MakeTxKV()
		bres, err := c.BroadcastTxCommit(ctx, tx)
		require.NoError(t, err, "%+v", err)

		txHeight := bres.Height
		txHash := bres.Hash

		anotherTxHash := types.Tx("a different tx").Hash()

		cases := []struct {
			valid bool
			prove bool
			hash  []byte
		}{
			// only valid if correct hash provided
			{true, false, txHash},
			{true, true, txHash},
			{false, false, anotherTxHash},
			{false, true, anotherTxHash},
			{false, false, nil},
			{false, true, nil},
		}

		for _, c := range GetClients(t, n, conf) {
			t.Run(fmt.Sprintf("%T", c), func(t *testing.T) {
				for j, tc := range cases {
					t.Run(fmt.Sprintf("Case%d", j), func(t *testing.T) {
						// now we query for the tx.
						// since there's only one tx, we know index=0.
						ptx, err := c.Tx(ctx, tc.hash, tc.prove)

						if !tc.valid {
							require.Error(t, err)
						} else {
							require.NoError(t, err, "%+v", err)
							assert.EqualValues(t, txHeight, ptx.Height)
							assert.EqualValues(t, tx, ptx.Tx)
							assert.Zero(t, ptx.Index)
							assert.True(t, ptx.TxResult.IsOK())
							assert.EqualValues(t, txHash, ptx.Hash)

							// time to verify the proof
							proof := ptx.Proof
							if tc.prove && assert.EqualValues(t, tx, proof.Data) {
								assert.NoError(t, proof.Proof.Verify(proof.RootHash, txHash))
							}
						}
					})
				}
			})
		}
	})
  609. t.Run("TxSearchWithTimeout", func(t *testing.T) {
  610. timeoutClient := getHTTPClientWithTimeout(t, conf, 10*time.Second)
  611. _, _, tx := MakeTxKV()
  612. _, err := timeoutClient.BroadcastTxCommit(ctx, tx)
  613. require.NoError(t, err)
  614. // query using a compositeKey (see kvstore application)
  615. result, err := timeoutClient.TxSearch(ctx, "app.creator='Cosmoshi Netowoko'", false, nil, nil, "asc")
  616. require.NoError(t, err)
  617. require.Greater(t, len(result.Txs), 0, "expected a lot of transactions")
  618. })
  619. t.Run("TxSearch", func(t *testing.T) {
  620. t.Skip("Test Asserts Non-Deterministic Results")
  621. c := getHTTPClient(t, conf)
  622. // first we broadcast a few txs
  623. for i := 0; i < 10; i++ {
  624. _, _, tx := MakeTxKV()
  625. _, err := c.BroadcastTxSync(ctx, tx)
  626. require.NoError(t, err)
  627. }
  628. // since we're not using an isolated test server, we'll have lingering transactions
  629. // from other tests as well
  630. result, err := c.TxSearch(ctx, "tx.height >= 0", true, nil, nil, "asc")
  631. require.NoError(t, err)
  632. txCount := len(result.Txs)
  633. // pick out the last tx to have something to search for in tests
  634. find := result.Txs[len(result.Txs)-1]
  635. anotherTxHash := types.Tx("a different tx").Hash()
  636. for _, c := range GetClients(t, n, conf) {
  637. t.Run(fmt.Sprintf("%T", c), func(t *testing.T) {
  638. // now we query for the tx.
  639. result, err := c.TxSearch(ctx, fmt.Sprintf("tx.hash='%v'", find.Hash), true, nil, nil, "asc")
  640. require.NoError(t, err)
  641. require.Len(t, result.Txs, 1)
				require.Equal(t, find.Hash, result.Txs[0].Hash)

				ptx := result.Txs[0]
				assert.EqualValues(t, find.Height, ptx.Height)
				assert.EqualValues(t, find.Tx, ptx.Tx)
				assert.Zero(t, ptx.Index)
				assert.True(t, ptx.TxResult.IsOK())
				assert.EqualValues(t, find.Hash, ptx.Hash)

				// time to verify the proof
				if assert.EqualValues(t, find.Tx, ptx.Proof.Data) {
					assert.NoError(t, ptx.Proof.Proof.Verify(ptx.Proof.RootHash, find.Hash))
				}

				// query by height
				result, err = c.TxSearch(ctx, fmt.Sprintf("tx.height=%d", find.Height), true, nil, nil, "asc")
				require.NoError(t, err)
				require.Len(t, result.Txs, 1)

				// query for a non-existing tx
				result, err = c.TxSearch(ctx, fmt.Sprintf("tx.hash='%X'", anotherTxHash), false, nil, nil, "asc")
				require.NoError(t, err)
				require.Len(t, result.Txs, 0)

				// query using a compositeKey (see kvstore application)
				result, err = c.TxSearch(ctx, "app.creator='Cosmoshi Netowoko'", false, nil, nil, "asc")
				require.NoError(t, err)
				require.Greater(t, len(result.Txs), 0, "expected a lot of transactions")

				// query using an index key
				result, err = c.TxSearch(ctx, "app.index_key='index is working'", false, nil, nil, "asc")
				require.NoError(t, err)
				require.Greater(t, len(result.Txs), 0, "expected a lot of transactions")

				// query using a noindex key yields nothing, since the key is not indexed
				result, err = c.TxSearch(ctx, "app.noindex_key='index is working'", false, nil, nil, "asc")
				require.NoError(t, err)
				require.Equal(t, 0, len(result.Txs), "expected no transactions")

				// query using a compositeKey (see kvstore application) and height
				result, err = c.TxSearch(ctx,
					"app.creator='Cosmoshi Netowoko' AND tx.height<10000", true, nil, nil, "asc")
				require.NoError(t, err)
				require.Greater(t, len(result.Txs), 0, "expected a lot of transactions")

				// query a non-existing tx with page 1 and txsPerPage 1
				perPage := 1
				result, err = c.TxSearch(ctx, "app.creator='Cosmoshi Neetowoko'", true, nil, &perPage, "asc")
				require.NoError(t, err)
				require.Len(t, result.Txs, 0)

				// check sorting
				result, err = c.TxSearch(ctx, "tx.height >= 1", false, nil, nil, "asc")
				require.NoError(t, err)
				for k := 0; k < len(result.Txs)-1; k++ {
					require.LessOrEqual(t, result.Txs[k].Height, result.Txs[k+1].Height)
					require.LessOrEqual(t, result.Txs[k].Index, result.Txs[k+1].Index)
				}

				result, err = c.TxSearch(ctx, "tx.height >= 1", false, nil, nil, "desc")
				require.NoError(t, err)
				for k := 0; k < len(result.Txs)-1; k++ {
					require.GreaterOrEqual(t, result.Txs[k].Height, result.Txs[k+1].Height)
					require.GreaterOrEqual(t, result.Txs[k].Index, result.Txs[k+1].Index)
				}

				// check pagination
				perPage = 3
				var (
					seen      = map[int64]bool{}
					maxHeight int64
					pages     = int(math.Ceil(float64(txCount) / float64(perPage)))
				)

				for page := 1; page <= pages; page++ {
					page := page
					result, err := c.TxSearch(ctx, "tx.height >= 1", false, &page, &perPage, "asc")
					require.NoError(t, err)
					if page < pages {
						require.Len(t, result.Txs, perPage)
					} else {
						require.LessOrEqual(t, len(result.Txs), perPage)
					}
					require.Equal(t, txCount, result.TotalCount)
					for _, tx := range result.Txs {
						require.False(t, seen[tx.Height],
							"Found duplicate height %v in page %v", tx.Height, page)
						require.Greater(t, tx.Height, maxHeight,
							"Found decreasing height %v (max seen %v) in page %v", tx.Height, maxHeight, page)
						seen[tx.Height] = true
						maxHeight = tx.Height
					}
				}
				require.Len(t, seen, txCount)
			})
		}
	})
}
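The pagination check above expects every page except the last to be full, with `pages = ceil(txCount / perPage)`. A minimal, self-contained sketch of that arithmetic (the `expectedPageLen` helper is illustrative only, not part of the client API):

```go
package main

import (
	"fmt"
	"math"
)

// expectedPageLen mirrors the pagination arithmetic used in the test above:
// txCount results split perPage at a time yield ceil(txCount/perPage) pages,
// every page before the last is full, and the last page holds the remainder.
func expectedPageLen(txCount, perPage, page int) int {
	pages := int(math.Ceil(float64(txCount) / float64(perPage)))
	if page < pages {
		return perPage
	}
	return txCount - (pages-1)*perPage
}

func main() {
	// 10 results, 3 per page: pages 1-3 are full, page 4 holds the last result.
	fmt.Println(expectedPageLen(10, 3, 1)) // 3
	fmt.Println(expectedPageLen(10, 3, 4)) // 1
}
```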
func testBatchedJSONRPCCalls(ctx context.Context, t *testing.T, c *rpchttp.HTTP) {
	k1, v1, tx1 := MakeTxKV()
	k2, v2, tx2 := MakeTxKV()

	// queue two broadcast requests in a single batch
	batch := c.NewBatch()
	r1, err := batch.BroadcastTxCommit(ctx, tx1)
	require.NoError(t, err)
	r2, err := batch.BroadcastTxCommit(ctx, tx2)
	require.NoError(t, err)
	require.Equal(t, 2, batch.Count())

	// sending the batch returns one result per request, in order, and
	// leaves the batch empty
	bresults, err := batch.Send(ctx)
	require.NoError(t, err)
	require.Len(t, bresults, 2)
	require.Equal(t, 0, batch.Count())

	bresult1, ok := bresults[0].(*coretypes.ResultBroadcastTxCommit)
	require.True(t, ok)
	require.Equal(t, *bresult1, *r1)
	bresult2, ok := bresults[1].(*coretypes.ResultBroadcastTxCommit)
	require.True(t, ok)
	require.Equal(t, *bresult2, *r2)

	// wait for a height at which both transactions have been committed
	apph := tmmath.MaxInt64(bresult1.Height, bresult2.Height) + 1
	err = client.WaitForHeight(ctx, c, apph, nil)
	require.NoError(t, err)

	// reuse the (now empty) batch to query both keys back
	q1, err := batch.ABCIQuery(ctx, "/key", k1)
	require.NoError(t, err)
	q2, err := batch.ABCIQuery(ctx, "/key", k2)
	require.NoError(t, err)
	require.Equal(t, 2, batch.Count())
	qresults, err := batch.Send(ctx)
	require.NoError(t, err)
	require.Len(t, qresults, 2)
	require.Equal(t, 0, batch.Count())

	qresult1, ok := qresults[0].(*coretypes.ResultABCIQuery)
	require.True(t, ok)
	require.Equal(t, *qresult1, *q1)
	qresult2, ok := qresults[1].(*coretypes.ResultABCIQuery)
	require.True(t, ok)
	require.Equal(t, *qresult2, *q2)

	require.Equal(t, qresult1.Response.Key, k1)
	require.Equal(t, qresult2.Response.Key, k2)
	require.Equal(t, qresult1.Response.Value, v1)
	require.Equal(t, qresult2.Response.Value, v2)
}
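The batch lifecycle the test exercises is: queue requests, `Count()` reports how many are pending, and `Send()` drains the queue and returns one result per request in queue order. A toy stand-in sketching those invariants (`toyBatch`, `Enqueue`, and the string "results" are all hypothetical; this is not the real `rpchttp` batch implementation, which sends a JSON-RPC batch over HTTP and decodes typed results):

```go
package main

import "fmt"

// toyBatch is a hypothetical stand-in for the batch lifecycle tested above:
// requests accumulate until Send, which returns results in queue order and
// empties the batch so it can be reused.
type toyBatch struct {
	queued []string
}

func (b *toyBatch) Enqueue(req string) { b.queued = append(b.queued, req) }

func (b *toyBatch) Count() int { return len(b.queued) }

func (b *toyBatch) Send() []string {
	results := make([]string, 0, len(b.queued)) // preallocate, one result per request
	for _, req := range b.queued {
		results = append(results, "result:"+req)
	}
	b.queued = nil // a successful Send empties the batch
	return results
}

func main() {
	b := &toyBatch{}
	b.Enqueue("broadcast_tx_commit#1")
	b.Enqueue("broadcast_tx_commit#2")
	fmt.Println(b.Count()) // 2
	results := b.Send()
	fmt.Println(len(results), b.Count()) // 2 0
}
```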