## Description
This PR removes the `Simple` prefix from all types in the `crypto/merkle` directory.
The two proto types `Proof` & `ProofOp` have been moved to the `proto/crypto/merkle` directory.
The proto message `Proof` was renamed to `ProofOps`, and the `SimpleProof` message to `Proof`.
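For illustration, a minimal Go sketch of the resulting types (field names follow the existing `ProofOp` shape; this is a paraphrase, not the generated proto code):

```go
package merkle

// ProofOp defines a single operation in a Merkle proof; the message
// keeps its name and only moves to proto/crypto/merkle.
type ProofOp struct {
	Type string
	Key  []byte
	Data []byte
}

// ProofOps is the message formerly named Proof: an ordered list of
// proof operations applied in sequence to verify a key/value pair
// against a root hash.
type ProofOps struct {
	Ops []ProofOp
}
```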
Closes: #2755
Fixes #2593

@alexanderbez This is used in a single place in the SDK, how upset are you about removing it?
______
For contributor use:
- [x] ~Wrote tests~
- [x] Updated CHANGELOG_PENDING.md
- [x] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
- [x] Updated relevant documentation (`docs/`) and code comments
- [x] Re-reviewed `Files changed` in the Github PR explorer
- [x] Applied Appropriate Labels
Issue #2611 suggested that an iterative version of
`SimpleHashFromByteSlices` would be faster, presumably because
we can envision some overhead accumulating from stack
frames and function calls. Additionally, a recursive algorithm risks
hitting the stack limit and causing a stack overflow should the tree
be too large.
Provided here is an iterative alternative, a simple test to assert
correctness and a benchmark. On the performance side, there appears to
be no overall difference:
```
BenchmarkSimpleHashAlternatives/recursive-4 20000 77677 ns/op
BenchmarkSimpleHashAlternatives/iterative-4 20000 76802 ns/op
```
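For reference, a self-contained sketch of the two shapes being compared. The `leafHash`/`innerHash` helpers and `getSplitPoint` here are assumptions standing in for the package's internals, using RFC 6962 domain separation:

```go
package merkle

import (
	"crypto/sha256"
	"math/bits"
)

// leafHash hashes a leaf with the RFC 6962 0x00 prefix.
func leafHash(leaf []byte) []byte {
	h := sha256.Sum256(append([]byte{0x00}, leaf...))
	return h[:]
}

// innerHash hashes two children with the RFC 6962 0x01 prefix.
func innerHash(left, right []byte) []byte {
	data := append([]byte{0x01}, left...)
	data = append(data, right...)
	h := sha256.Sum256(data)
	return h[:]
}

// getSplitPoint returns the largest power of two strictly less than length.
func getSplitPoint(length int) int {
	k := 1 << uint(bits.Len(uint(length))-1)
	if k == length {
		k >>= 1
	}
	return k
}

// Recursive shape: split per RFC 6962 and recurse into each half.
func SimpleHashFromByteSlices(items [][]byte) []byte {
	switch len(items) {
	case 0:
		return nil
	case 1:
		return leafHash(items[0])
	default:
		k := getSplitPoint(len(items))
		left := SimpleHashFromByteSlices(items[:k])
		right := SimpleHashFromByteSlices(items[k:])
		return innerHash(left, right)
	}
}

// Iterative shape: hash every leaf once, then repeatedly combine
// adjacent pairs in place (promoting an odd trailing node) until a
// single root remains. This produces the same tree as the recursive
// version.
func SimpleHashFromByteSlicesIterative(input [][]byte) []byte {
	items := make([][]byte, len(input))
	for i, leaf := range input {
		items[i] = leafHash(leaf)
	}
	size := len(items)
	for {
		switch size {
		case 0:
			return nil
		case 1:
			return items[0]
		default:
			rp := 0 // read position
			wp := 0 // write position
			for rp < size {
				if rp+1 < size {
					items[wp] = innerHash(items[rp], items[rp+1])
					rp += 2
				} else {
					items[wp] = items[rp]
					rp++
				}
				wp++
			}
			size = wp
		}
	}
}
```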
On the surface it might seem that the small difference in overhead is due
to the implementations' different allocation patterns. The recursive
version uses a single `[][]byte` slice, which it re-slices at each level of
the tree. The iterative version allocates a fresh `[][]byte` once within
the function and then rewrites sub-slices of that array at each level of
the tree. However, experimenting by modifying the code to simply calculate
the hash and not store the result showed little to no difference in
performance.
These preliminary results suggest:
1. The performance of the current implementation is pretty good
2. Go has low overhead for recursive functions
3. The performance of the `SimpleHashFromByteSlices` routine is dominated
by the actual hashing of data
Although this work is in no way exhaustive, point 3 suggests that
optimizations of this routine would need to take an alternative
approach to achieve significant improvements over the current performance.
Finally, considering that the recursive implementation is easier to
read, it might not be worthwhile to switch to a less intuitive
implementation for so little benefit.
* re-add slice re-writing
* [crypto] Document SimpleHashFromByteSlicesIterative
* Begin simple merkle compatibility PR
* Fix query_test
* Use trillian test vectors
* Change the split point per RFC 6962 (see the MTH definition quoted below)
* update spec
* refactor innerhash to match spec
* Update changelog
* Address @liamsi's comments
* Write the comment requested by @liamsi
This is a performance regression, but it also spares the types directory
from knowing about RFC 6962, which is a more correct abstraction. For txs,
this performance hit will be fixed soon by #2603. For evidence, the
performance impact is negligible because the amount of evidence is capped
at a small number.
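For reference, RFC 6962, Section 2.1 defines the Merkle Tree Hash (MTH) with exactly this split point (quoted in the RFC's notation):

```
MTH({}) = SHA-256()
MTH({d(0)}) = SHA-256(0x00 || d(0))
MTH(D[n]) = SHA-256(0x01 || MTH(D[0:k]) || MTH(D[k:n]))
    where k is the largest power of two smaller than n (i.e., k < n <= 2k)
```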
* crypto/merkle: Remove byter in favor of plain byte slices
This PR is fully backwards compatible in terms of function output!
(The Go API differs, though.) The only test-case change was to refactor
it to be table-driven.
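A sketch of the table-driven shape (illustrative, not the actual test from the PR), reusing the two implementations sketched earlier to cross-check each other:

```go
package merkle

import (
	"bytes"
	"testing"
)

// TestIterativeMatchesRecursive cross-checks the recursive and
// iterative sketches above over a table of tree sizes.
func TestIterativeMatchesRecursive(t *testing.T) {
	tests := []struct {
		name   string
		nLeafs int
	}{
		{"empty", 0},
		{"single leaf", 1},
		{"odd count", 5},
		{"power of two", 8},
	}
	for _, tc := range tests {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			items := make([][]byte, tc.nLeafs)
			for i := range items {
				items[i] = []byte{byte(i)}
			}
			rec := SimpleHashFromByteSlices(items)
			itr := SimpleHashFromByteSlicesIterative(items)
			if !bytes.Equal(rec, itr) {
				t.Errorf("%d leaves: recursive %X != iterative %X", tc.nLeafs, rec, itr)
			}
		})
	}
}
```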
* Update godocs per review comments
Except now we calculate the max size using the `maxPacketMsgSize()`
function, which frees developers from having to know amino encoding
details.
Plus, 10 additional bytes are added to leave room for amino upgrades
(which could make the encoding either more or less efficient).
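A minimal sketch of the idea (assuming an amino codec and a `PacketMsg`-shaped type; the names mirror the p2p packet message but are illustrative here):

```go
package p2p

import (
	amino "github.com/tendermint/go-amino"
)

var cdc = amino.NewCodec()

// PacketMsg mirrors the shape of the p2p packet message.
type PacketMsg struct {
	ChannelID byte
	EOF       byte
	Bytes     []byte
}

// maxPacketMsgSize measures the encoded size of a packet carrying the
// largest allowed payload, so callers never hand-compute amino overhead.
// The extra 10 bytes leave room for amino upgrades, which could make the
// encoding either more or less efficient.
func maxPacketMsgSize(maxPayloadSize int) int {
	return len(cdc.MustMarshalBinaryLengthPrefixed(PacketMsg{
		ChannelID: 0x01,
		EOF:       1,
		Bytes:     make([]byte, maxPayloadSize),
	})) + 10
}
```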