crypto: Proof of Concept for iterative version of SimpleHashFromByteSlices (#2611) (#3530)

(#2611) had suggested that an iterative version of SimpleHashFromByteSlices would be faster, presumably because we can envision some overhead accumulating from stack frames and function calls. Additionally, a recursive algorithm risks hitting the stack limit and causing a stack overflow should the tree be too large.

Provided here is an iterative alternative, a simple test to assert correctness, and a benchmark (a sketch follows this entry). On the performance side, there appears to be no overall difference:

```
BenchmarkSimpleHashAlternatives/recursive-4   20000   77677 ns/op
BenchmarkSimpleHashAlternatives/iterative-4   20000   76802 ns/op
```

On the surface it might seem that the additional overhead is due to the different allocation patterns of the implementations. The recursive version uses a single `[][]byte` slice which it then re-slices at each level of the tree. The iterative version allocates a `[][]byte` once within the function and then rewrites sub-slices of that array at each level of the tree. Experimenting by modifying the code to simply calculate the hash and not store the result shows little to no difference in performance.

These preliminary results suggest:

1. The performance of the current implementation is pretty good
2. Go has low overhead for recursive functions
3. The performance of the SimpleHashFromByteSlices routine is dominated by the actual hashing of data

Although this work is in no way exhaustive, point #3 suggests that optimizations of this routine would need to take an alternative approach to make significant improvements on the current performance. Finally, considering that the recursive implementation is easier to read, it might not be worthwhile to switch to a less intuitive implementation for so little benefit.

* re-add slice re-writing
* [crypto] Document SimpleHashFromByteSlicesIterative
5 years ago
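
A benchmark of the shape quoted above can be written with Go's `testing` package. This is a minimal sketch, assuming the merkle package API shown in the file below; the leaf count and leaf size are assumptions, since the original benchmark inputs are not reproduced here.

```
package merkle

import "testing"

func BenchmarkHashAlternatives(b *testing.B) {
	// Assumed workload: 100 leaves of 100 bytes each.
	items := make([][]byte, 100)
	for i := range items {
		items[i] = make([]byte, 100)
	}

	b.Run("recursive", func(b *testing.B) {
		for n := 0; n < b.N; n++ {
			_ = HashFromByteSlices(items)
		}
	})
	b.Run("iterative", func(b *testing.B) {
		for n := 0; n < b.N; n++ {
			_ = HashFromByteSlicesIterative(items)
		}
	})
}
```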
lint: Enable Golint (#4212)

* Fix many golint errors
* Fix golint errors in the 'lite' package
* Don't export Pool.store
* Fix typo
* Revert unwanted changes
* Fix errors in counter package
* Fix linter errors in kvstore package
* Fix linter error in example package
* Fix error in tests package
* Fix linter errors in v2 package
* Fix linter errors in consensus package
* Fix linter errors in evidence package
* Fix linter error in fail package
* Fix linter errors in query package
* Fix linter errors in core package
* Fix linter errors in node package
* Fix linter errors in mempool package
* Fix linter error in conn package
* Fix linter errors in pex package
* Rename PEXReactor export to Reactor (see the sketch below)
* Fix linter errors in trust package
* Fix linter errors in upnp package
* Fix linter errors in p2p package
* Fix linter errors in proxy package
* Fix linter errors in mock_test package
* Fix linter error in client_test package
* Fix linter errors in coretypes package
* Fix linter errors in coregrpc package
* Fix linter errors in rpcserver package
* Fix linter errors in rpctypes package
* Fix linter errors in rpctest package
* Fix linter error in json2wal script
* Fix linter error in wal2json script
* Fix linter errors in kv package
* Fix linter error in state package
* Fix linter error in grpc_client
* Fix linter errors in types package
* Fix linter error in version package
* Fix remaining errors
* Address review comments
* Fix broken tests
* Reconcile package coregrpc
* Fix golangci bot error
* Fix new golint errors
* Fix broken reference
* Enable golint linter
* minor changes to bring golint into line
* fix failing test
* fix pex reactor naming
* address PR comments
5 years ago
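
For context, the "Rename PEXReactor export to Reactor" item follows golint's name-stuttering rule: inside package pex, the exported name reads as pex.PEXReactor at call sites, which repeats the package name, so golint prefers pex.Reactor. A minimal sketch with a hypothetical type body, not the actual p2p code:

```
package pex

// Reactor handles peer exchange. Before the rename this type was called
// PEXReactor, which golint flags because callers would write the
// stuttering qualified name pex.PEXReactor.
type Reactor struct {
	// fields elided in this sketch
}
```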
package merkle

import (
	"crypto/sha256"
	"hash"
	"math/bits"
)
// HashFromByteSlices computes a Merkle tree where the leaves are the byte slices,
// in the provided order. It follows RFC-6962.
func HashFromByteSlices(items [][]byte) []byte {
	return hashFromByteSlices(sha256.New(), items)
}

func hashFromByteSlices(sha hash.Hash, items [][]byte) []byte {
	switch len(items) {
	case 0:
		return emptyHash()
	case 1:
		return leafHashOpt(sha, items[0])
	default:
		k := getSplitPoint(int64(len(items)))
		left := hashFromByteSlices(sha, items[:k])
		right := hashFromByteSlices(sha, items[k:])
		return innerHashOpt(sha, left, right)
	}
}
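
// For example, with 6 leaves getSplitPoint returns 4 (the largest power
// of two below 6): the left subtree hashes leaves 0-3 and the right
// subtree hashes leaves 4-5, matching the RFC-6962 split rule.
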
// HashFromByteSlicesIterative is an iterative alternative to
// HashFromByteSlices motivated by potential performance improvements.
// (#2611) had suggested that an iterative version of
// HashFromByteSlices would be faster, presumably because
// we can envision some overhead accumulating from stack
// frames and function calls. Additionally, a recursive algorithm risks
// hitting the stack limit and causing a stack overflow should the tree
// be too large.
//
// Provided here is an iterative alternative, a test to assert
// correctness and a benchmark. On the performance side, there appears to
// be no overall difference:
//
//	BenchmarkHashAlternatives/recursive-4   20000   77677 ns/op
//	BenchmarkHashAlternatives/iterative-4   20000   76802 ns/op
//
// On the surface it might seem that the additional overhead is due to
// the different allocation patterns of the implementations. The recursive
// version uses a single [][]byte slice which it then re-slices at each
// level of the tree. The iterative version allocates a [][]byte once
// within the function and then rewrites sub-slices of that array at each
// level of the tree.
//
// Experimenting by modifying the code to simply calculate the hash and
// not store the result shows little to no difference in performance.
//
// These preliminary results suggest:
//
//  1. The performance of HashFromByteSlices is pretty good
//  2. Go has low overhead for recursive functions
//  3. The performance of the HashFromByteSlices routine is dominated
//     by the actual hashing of data
//
// Although this work is in no way exhaustive, point #3 suggests that
// optimization of this routine would need to take an alternative
// approach to make significant improvements on the current performance.
//
// Finally, considering that the recursive implementation is easier to
// read, it might not be worthwhile to switch to a less intuitive
// implementation for so little benefit.
func HashFromByteSlicesIterative(input [][]byte) []byte {
	items := make([][]byte, len(input))
	sha := sha256.New()

	for i, leaf := range input {
		items[i] = leafHash(leaf)
	}

	size := len(items)
	for {
		switch size {
		case 0:
			return emptyHash()
		case 1:
			return items[0]
		default:
			rp := 0 // read position
			wp := 0 // write position
			for rp < size {
				if rp+1 < size {
					items[wp] = innerHashOpt(sha, items[rp], items[rp+1])
					rp += 2
				} else {
					items[wp] = items[rp]
					rp++
				}
				wp++
			}
			size = wp
		}
	}
}
// getSplitPoint returns the largest power of 2 less than length.
func getSplitPoint(length int64) int64 {
	if length < 1 {
		panic("Trying to split a tree with size < 1")
	}
	uLength := uint(length)
	bitlen := bits.Len(uLength)
	k := int64(1 << uint(bitlen-1))
	if k == length {
		k >>= 1
	}
	return k
}
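
As a usage sketch, the correctness check described in the commit message can be expressed as a round-trip test: both implementations must produce the same root hash for the same input. This assumes the file above plus the merkle package's unexported helpers (emptyHash, leafHash, leafHashOpt, innerHashOpt), which are defined elsewhere in the package; the test name and inputs here are illustrative, not the project's actual test.

```
package merkle

import (
	"bytes"
	"fmt"
	"testing"
)

func TestHashAlternativesAgree(t *testing.T) {
	// Exercise the empty tree, a single leaf, and sizes around a
	// power-of-two boundary.
	for _, n := range []int{0, 1, 2, 3, 63, 64, 65} {
		items := make([][]byte, n)
		for i := range items {
			items[i] = []byte(fmt.Sprintf("leaf-%d", i))
		}
		recursive := HashFromByteSlices(items)
		iterative := HashFromByteSlicesIterative(items)
		if !bytes.Equal(recursive, iterative) {
			t.Errorf("n=%d: recursive root %X != iterative root %X", n, recursive, iterative)
		}
	}
}
```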