Fixes #5192.
@liamsi Can you verify that the test vectors match the Rust implementation? I updated `ProofsFromByteSlices()` as well; is there anything else that should be updated?
## Description
This PR removes the `Simple` prefix from all types in the `crypto/merkle` directory.
The two proto types `Proof` and `ProofOp` have been moved to the `proto/crypto/merkle` directory.
The proto message `Proof` was renamed to `ProofOps`, and the `SimpleProof` message was renamed to `Proof`.
Closes: #2755
* lint: golint issue fixes
- golint is a lot stricter on my local machine than the bot, so I am slowly going through and fixing things.
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* more fixes from golint
* remove isPeerPersistentFn
* add changelog entry
* libs/common: refactor libs/common 2
- move the random functions to their own pkg
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* change imports and usage throughout repo
* fix goimports
* add changelog entry
(#2611) suggested that an iterative version of
`SimpleHashFromByteSlices` would be faster, presumably because of the
overhead that accumulates from stack frames and function calls.
Additionally, a recursive algorithm risks hitting the stack limit and
causing a stack overflow should the tree be too large.
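For reference, here is a minimal sketch of the recursive shape in
question. The names and the empty-tree case are illustrative rather
than the repo's exact code; the leaf/inner prefixes and the split point
follow RFC 6962, as referenced later in this log.

```
package merkle

import "crypto/sha256"

// leafHash returns SHA-256(0x00 || leaf), per RFC 6962.
func leafHash(leaf []byte) []byte {
	h := sha256.Sum256(append([]byte{0x00}, leaf...))
	return h[:]
}

// innerHash returns SHA-256(0x01 || left || right), per RFC 6962.
func innerHash(left, right []byte) []byte {
	data := make([]byte, 0, 1+len(left)+len(right))
	data = append(data, 0x01)
	data = append(data, left...)
	data = append(data, right...)
	h := sha256.Sum256(data)
	return h[:]
}

// getSplitPoint returns the largest power of two strictly less than n,
// the split point RFC 6962 prescribes for n > 1 leaves.
func getSplitPoint(n int) int {
	k := 1
	for k*2 < n {
		k *= 2
	}
	return k
}

// hashFromByteSlices recursively computes a Merkle root over items.
// Each call only re-slices the same backing array, so the per-call
// cost is the stack frame plus the hashing itself.
func hashFromByteSlices(items [][]byte) []byte {
	switch len(items) {
	case 0:
		h := sha256.Sum256(nil) // empty-tree hash (illustrative choice)
		return h[:]
	case 1:
		return leafHash(items[0])
	default:
		k := getSplitPoint(len(items))
		left := hashFromByteSlices(items[:k])
		right := hashFromByteSlices(items[k:])
		return innerHash(left, right)
	}
}
```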
Provided here is an iterative alternative, a simple test to assert
correctness, and a benchmark. On the performance side, there appears to
be no overall difference:
```
BenchmarkSimpleHashAlternatives/recursive-4 20000 77677 ns/op
BenchmarkSimpleHashAlternatives/iterative-4 20000 76802 ns/op
```
On the surface one might have expected the two implementations'
different allocation patterns to produce a measurable difference. The
recursive version uses a single `[][]byte` slice, which it re-slices at
each level of the tree. The iterative version allocates a `[][]byte`
once within the function and then rewrites sub-slices of that array at
each level of the tree. Experimenting by modifying the code to simply
calculate the hash and not store the result shows little to no
difference in performance.
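To make that allocation pattern concrete, here is a hedged sketch of
the iterative shape (illustrative, not the exact
`SimpleHashFromByteSlicesIterative` code): leaf hashes are written into
one array up front, and each pass collapses adjacent pairs in place,
promoting an odd node out to the next level.

```
// hashFromByteSlicesIterative computes the same root as the recursive
// sketch above, without recursion. It allocates one [][]byte up front
// and rewrites sub-slices of that array level by level.
func hashFromByteSlicesIterative(input [][]byte) []byte {
	items := make([][]byte, len(input))
	for i, leaf := range input {
		items[i] = leafHash(leaf)
	}

	size := len(items)
	for {
		switch size {
		case 0:
			h := sha256.Sum256(nil) // empty-tree hash, as above
			return h[:]
		case 1:
			return items[0]
		default:
			rp := 0 // read position
			wp := 0 // write position
			for rp < size {
				if rp+1 < size {
					items[wp] = innerHash(items[rp], items[rp+1])
					rp += 2
				} else {
					// odd node out: promote it to the next level
					items[wp] = items[rp]
					rp++
				}
				wp++
			}
			size = wp
		}
	}
}
```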
These preliminary results suggest:
1. The performance of the current implementation is pretty good
2. Go has low overhead for recursive functions
3. The performance of the `SimpleHashFromByteSlices` routine is dominated
by the actual hashing of data
Although this work is in no way exhaustive, point #3 suggests that
optimizations of this routine would need to take an alternative
approach to make significant improvements over the current performance.
Finally, considering that the recursive implementation is easier to
read, it might not be worthwhile to switch to a less intuitive
implementation for so little benefit.
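Under the same assumptions as the sketches above, the correctness test
and benchmark mentioned earlier could look roughly like this,
cross-checking the two implementations and timing them side by side:

```
import (
	"bytes"
	"testing"
)

// TestIterativeMatchesRecursive cross-checks the two sketches above on
// a range of leaf counts, including the empty and single-leaf cases.
func TestIterativeMatchesRecursive(t *testing.T) {
	for n := 0; n <= 100; n++ {
		items := make([][]byte, n)
		for i := range items {
			items[i] = []byte{byte(i)}
		}
		rec := hashFromByteSlices(items)
		itr := hashFromByteSlicesIterative(items)
		if !bytes.Equal(rec, itr) {
			t.Fatalf("mismatch at n=%d: %X != %X", n, rec, itr)
		}
	}
}

// BenchmarkHashAlternatives compares the two implementations, in the
// spirit of the numbers quoted above.
func BenchmarkHashAlternatives(b *testing.B) {
	items := make([][]byte, 100)
	for i := range items {
		items[i] = make([]byte, 32)
	}
	b.Run("recursive", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			_ = hashFromByteSlices(items)
		}
	})
	b.Run("iterative", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			_ = hashFromByteSlicesIterative(items)
		}
	})
}
```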
* re-add slice re-writing
* [crypto] Document SimpleHashFromByteSlicesIterative
* Begin simple merkle compatibility PR
* Fix query_test
* Use trillian test vectors
* Change the split point per RFC 6962
* update spec
* refactor innerhash to match spec
* Update changelog
* Address @liamsi's comments
* Write the comment requested by @liamsi
* crypto/merkle: Remove byter in favor of plain byte slices
This PR is fully backwards compatible in terms of function output!
(The Go API differs, though.) The only test case change was refactoring
it to be table-driven; a rough sketch of the API shape change follows below.
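As a hedged illustration of that shape change (the interface name and
wrapper below are hypothetical, not the old API verbatim), the
interface indirection roughly amounted to this:

```
// Hypothetical pre-change shape: an interface stood between callers
// and the hash function.
type byter interface {
	Bytes() []byte
}

func hashFromByters(items []byter) []byte {
	bzs := make([][]byte, len(items))
	for i, b := range items {
		bzs[i] = b.Bytes()
	}
	return hashFromByteSlices(bzs) // from the sketch above
}
```

After the change, callers build the `[][]byte` themselves and call the
hash function directly, producing identical output without the extra
interface hop.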
* Update godocs per review comments
Except now we calculate the max size using the `maxPacketMsgSize()`
function, which frees developers from having to know amino encoding
details. Plus, 10 additional bytes are added to leave room for amino
upgrades (which may make the encoding either more or less efficient).
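A rough sketch of the idea (the struct fields and the injected encoder
are assumptions for illustration, not the repo's exact definitions):
encode a maximal packet once and derive the limit from its length, plus
the 10-byte headroom.

```
package conn

// PacketMsg here is an illustrative stand-in for the real packet type.
type PacketMsg struct {
	ChannelID byte
	EOF       byte
	Bytes     []byte
}

// maxPacketMsgSize derives the wire-size limit by encoding the largest
// possible packet instead of hard-coding amino overhead. The encode
// function is passed in, since the codec details are exactly what
// callers should not need to know.
func maxPacketMsgSize(maxPayloadSize int, encode func(PacketMsg) []byte) int {
	largest := PacketMsg{
		ChannelID: 0x01,
		EOF:       1,
		Bytes:     make([]byte, maxPayloadSize),
	}
	// +10 bytes of headroom so future amino changes (more or less
	// efficient encodings) do not silently break the limit.
	return len(encode(largest)) + 10
}
```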