Issue (#2611) suggested that an iterative version of
SimpleHashFromByteSlice would be faster, presumably because
some overhead accumulates from stack frames and function calls.
Additionally, a recursive algorithm risks hitting the stack limit
and causing a stack overflow should the tree be too large.
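For orientation, here is a minimal sketch of the recursive shape, assuming
plain SHA-256 and a largest-power-of-two split point; the hashing details
of the real routine (e.g. leaf/inner domain separation) may differ:
```
package merkle

import "crypto/sha256"

// simpleHashRecursive is an illustrative stand-in for the recursive
// routine: split the items at the largest power of two, recurse into
// each side, and hash the two child roots together. Every level of
// the tree consumes a stack frame, which is the overflow risk noted
// above.
func simpleHashRecursive(items [][]byte) []byte {
	switch len(items) {
	case 0:
		return nil
	case 1:
		h := sha256.Sum256(items[0]) // leaf hash
		return h[:]
	default:
		k := splitPoint(len(items))
		left := simpleHashRecursive(items[:k])
		right := simpleHashRecursive(items[k:])
		h := sha256.Sum256(append(left, right...)) // inner hash
		return h[:]
	}
}

// splitPoint returns the largest power of two strictly less than n.
func splitPoint(n int) int {
	k := 1
	for k*2 < n {
		k *= 2
	}
	return k
}
```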
Provided here are an iterative alternative, a simple test to assert
correctness, and a benchmark. On the performance side, there appears to
be no overall difference:
```
BenchmarkSimpleHashAlternatives/recursive-4 20000 77677 ns/op
BenchmarkSimpleHashAlternatives/iterative-4 20000 76802 ns/op
```
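For completeness, a sketch of how such numbers could be produced with Go's
sub-benchmarks, including the correctness cross-check; it uses the
illustrative simpleHashRecursive above and simpleHashIterative shown
further below, and the leaf count and size here are assumptions, not the
original fixture:
```
package merkle

import (
	"bytes"
	"math/rand"
	"testing"
)

// TestSimpleHashAlternatives cross-checks that the two sketches
// compute the same root for a range of leaf counts.
func TestSimpleHashAlternatives(t *testing.T) {
	for _, n := range []int{0, 1, 2, 3, 7, 100} {
		items := randomItems(n)
		if !bytes.Equal(simpleHashIterative(items), simpleHashRecursive(items)) {
			t.Errorf("n=%d: iterative root != recursive root", n)
		}
	}
}

// BenchmarkSimpleHashAlternatives times both sketches on the same input.
func BenchmarkSimpleHashAlternatives(b *testing.B) {
	items := randomItems(100) // assumed fixture size
	b.Run("recursive", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			simpleHashRecursive(items)
		}
	})
	b.Run("iterative", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			simpleHashIterative(items)
		}
	})
}

// randomItems builds n pseudo-random 32-byte leaves.
func randomItems(n int) [][]byte {
	items := make([][]byte, n)
	for i := range items {
		items[i] = make([]byte, 32)
		rand.Read(items[i])
	}
	return items
}
```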
On the surface it might seem that the additional overhead is due to
the different allocation patterns of the implementations. The recursive
version uses a single `[][]byte` slice, which it then re-slices at each
level of the tree. The iterative version allocates a `[][]byte` once
within the function and then rewrites sub-slices of that array at each
level of the tree. However, experimenting by modifying the code to simply
calculate the hash and not store the result shows little to no
difference in performance.
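A sketch of that iterative pattern (same illustrative hashing as the
recursive sketch above): hash every leaf into one scratch slice, then
repeatedly combine adjacent pairs in place, shrinking the live window on
each pass until a single root remains:
```
package merkle

import "crypto/sha256"

// simpleHashIterative is an illustrative stand-in for the iterative
// routine described above.
func simpleHashIterative(input [][]byte) []byte {
	if len(input) == 0 {
		return nil
	}
	// One allocation up front; every pass below rewrites sub-slices
	// of this same array instead of allocating per level.
	items := make([][]byte, len(input))
	for i, leaf := range input {
		h := sha256.Sum256(leaf) // leaf hash
		items[i] = h[:]
	}
	for size := len(items); size > 1; {
		wp := 0                        // write position
		for rp := 0; rp < size; wp++ { // read position
			if rp+1 < size {
				h := sha256.Sum256(append(items[rp], items[rp+1]...)) // inner hash
				items[wp] = h[:]
				rp += 2
			} else {
				items[wp] = items[rp] // odd node: promote to the next level
				rp++
			}
		}
		size = wp
	}
	return items[0]
}
```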
These preliminary results suggest:
1. The performance of the current implementation is pretty good
2. Go has low overhead for recursive functions
3. The performance of the SimpleHashFromByteSlice routine is dominated
by the actual hashing of data
Although this work is in no way exhaustive, point #3 suggests that
optimizations of this routine would need to take an alternative
approach to significantly improve on the current performance.
Finally, considering that the recursive implementation is easier to
read, it might not be worthwhile to switch to a less intuitive
implementation for so little benefit.
* re-add slice re-writing
* [crypto] Document SimpleHashFromByteSlicesIterative
For smaller static data structures that don't require immutable snapshots or mutability,
such as the transactions and validation signatures of a block, this simple Merkle tree logic can be used to compute the hash.
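As a usage sketch of that comment, assuming the exported entry point is
merkle.SimpleHashFromByteSlices([][]byte) []byte and using made-up
transaction bytes:
```
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto/merkle"
)

func main() {
	// Hypothetical raw transactions of a block.
	txs := [][]byte{
		[]byte("tx1"),
		[]byte("tx2"),
		[]byte("tx3"),
	}
	root := merkle.SimpleHashFromByteSlices(txs)
	fmt.Printf("transaction merkle root: %X\n", root)
}
```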