// Modified for Tendermint
// Originally Copyright (c) 2013-2014 Conformal Systems LLC.
// https://github.com/conformal/btcd/blob/master/LICENSE

package pex

import (
	"encoding/binary"
	"fmt"
	"hash"
	"math"
	mrand "math/rand"
	"net"
	"sync"
	"time"

	"github.com/minio/highwayhash"

	"github.com/tendermint/tendermint/crypto"
	tmsync "github.com/tendermint/tendermint/internal/libs/sync"
	"github.com/tendermint/tendermint/internal/p2p"
	tmmath "github.com/tendermint/tendermint/libs/math"
	tmrand "github.com/tendermint/tendermint/libs/rand"
	"github.com/tendermint/tendermint/libs/service"
)
const (
	bucketTypeNew = 0x01
	bucketTypeOld = 0x02
)

// AddrBook is an address book used for tracking peers
// so we can gossip about them to others and select
// peers to dial.
// TODO: break this up?
type AddrBook interface {
	service.Service

	// Add our own addresses so we don't later add ourselves
	AddOurAddress(*p2p.NetAddress)
	// Check if it is our address
	OurAddress(*p2p.NetAddress) bool

	AddPrivateIDs([]string)

	// Add and remove an address
	AddAddress(addr *p2p.NetAddress, src *p2p.NetAddress) error
	RemoveAddress(*p2p.NetAddress)

	// Check if the address is in the book
	HasAddress(*p2p.NetAddress) bool

	// Do we need more peers?
	NeedMoreAddrs() bool
	// Is Address Book Empty? Answer should not depend on being in your own
	// address book, or private peers
	Empty() bool

	// Pick an address to dial
	PickAddress(biasTowardsNewAddrs int) *p2p.NetAddress

	// Mark address
	MarkGood(p2p.NodeID)
	MarkAttempt(*p2p.NetAddress)
	MarkBad(*p2p.NetAddress, time.Duration) // Move peer to bad peers list
	// Add bad peers back to addrBook
	ReinstateBadPeers()

	IsGood(*p2p.NetAddress) bool
	IsBanned(*p2p.NetAddress) bool

	// Send a selection of addresses to peers
	GetSelection() []*p2p.NetAddress
	// Send a selection of addresses with bias
	GetSelectionWithBias(biasTowardsNewAddrs int) []*p2p.NetAddress

	Size() int

	// Persist to disk
	Save()
}
var _ AddrBook = (*addrBook)(nil)

// addrBook - concurrency safe peer address manager.
// Implements AddrBook.
type addrBook struct {
	service.BaseService

	// accessed concurrently
	mtx        tmsync.Mutex
	ourAddrs   map[string]struct{}
	privateIDs map[p2p.NodeID]struct{}
	addrLookup map[p2p.NodeID]*knownAddress // new & old
	badPeers   map[p2p.NodeID]*knownAddress // blacklisted peers
	bucketsOld []map[string]*knownAddress
	bucketsNew []map[string]*knownAddress
	nOld       int
	nNew       int

	// immutable after creation
	filePath          string
	key               string // random prefix for bucket placement
	routabilityStrict bool
	hasher            hash.Hash64

	wg sync.WaitGroup
}

func mustNewHasher() hash.Hash64 {
	key := crypto.CRandBytes(highwayhash.Size)
	hasher, err := highwayhash.New64(key)
	if err != nil {
		panic(err)
	}
	return hasher
}
// NewAddrBook creates a new address book.
// Use Start to begin processing asynchronous address updates.
func NewAddrBook(filePath string, routabilityStrict bool) AddrBook {
	am := &addrBook{
		ourAddrs:          make(map[string]struct{}),
		privateIDs:        make(map[p2p.NodeID]struct{}),
		addrLookup:        make(map[p2p.NodeID]*knownAddress),
		badPeers:          make(map[p2p.NodeID]*knownAddress),
		filePath:          filePath,
		routabilityStrict: routabilityStrict,
	}
	am.init()
	am.BaseService = *service.NewBaseService(nil, "AddrBook", am)
	return am
}

// Initialize the buckets.
// When modifying this, don't forget to update loadFromFile()
func (a *addrBook) init() {
	a.key = crypto.CRandHex(24) // 24/2 * 8 = 96 bits

	// New addr buckets
	a.bucketsNew = make([]map[string]*knownAddress, newBucketCount)
	for i := range a.bucketsNew {
		a.bucketsNew[i] = make(map[string]*knownAddress)
	}

	// Old addr buckets
	a.bucketsOld = make([]map[string]*knownAddress, oldBucketCount)
	for i := range a.bucketsOld {
		a.bucketsOld[i] = make(map[string]*knownAddress)
	}

	a.hasher = mustNewHasher()
}
// OnStart implements Service.
func (a *addrBook) OnStart() error {
	if err := a.BaseService.OnStart(); err != nil {
		return err
	}
	a.loadFromFile(a.filePath)

	// wg.Add to ensure that any invocation of .Wait()
	// later on will wait for saveRoutine to terminate.
	a.wg.Add(1)
	go a.saveRoutine()

	return nil
}

// OnStop implements Service.
func (a *addrBook) OnStop() {
	a.BaseService.OnStop()
}

func (a *addrBook) Wait() {
	a.wg.Wait()
}

func (a *addrBook) FilePath() string {
	return a.filePath
}
//-------------------------------------------------------

// AddOurAddress adds one of our addresses.
func (a *addrBook) AddOurAddress(addr *p2p.NetAddress) {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	a.Logger.Info("Add our address to book", "addr", addr)
	a.ourAddrs[addr.String()] = struct{}{}
}

// OurAddress returns true if it is our address.
func (a *addrBook) OurAddress(addr *p2p.NetAddress) bool {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	_, ok := a.ourAddrs[addr.String()]
	return ok
}

func (a *addrBook) AddPrivateIDs(ids []string) {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	for _, id := range ids {
		a.privateIDs[p2p.NodeID(id)] = struct{}{}
	}
}

// AddAddress implements AddrBook.
// Add address to a "new" bucket. If it's already in one, only add it probabilistically.
// Returns an error if the addr is non-routable. Does not add self.
// NOTE: addr must not be nil
func (a *addrBook) AddAddress(addr *p2p.NetAddress, src *p2p.NetAddress) error {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	return a.addAddress(addr, src)
}

// RemoveAddress implements AddrBook - removes the address from the book.
func (a *addrBook) RemoveAddress(addr *p2p.NetAddress) {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	a.removeAddress(addr)
}
// IsGood returns true if the peer was ever marked as good and hasn't
// done anything wrong since then.
func (a *addrBook) IsGood(addr *p2p.NetAddress) bool {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	return a.addrLookup[addr.ID].isOld()
}

// IsBanned returns true if the peer is currently banned.
func (a *addrBook) IsBanned(addr *p2p.NetAddress) bool {
	a.mtx.Lock()
	_, ok := a.badPeers[addr.ID]
	a.mtx.Unlock()

	return ok
}

// HasAddress returns true if the address is in the book.
func (a *addrBook) HasAddress(addr *p2p.NetAddress) bool {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	ka := a.addrLookup[addr.ID]
	return ka != nil
}

// NeedMoreAddrs implements AddrBook - returns true if there are not enough addresses in the book.
func (a *addrBook) NeedMoreAddrs() bool {
	return a.Size() < needAddressThreshold
}
  210. // Empty implements AddrBook - returns true if there are no addresses in the address book.
  211. // Does not count the peer appearing in its own address book, or private peers.
  212. func (a *addrBook) Empty() bool {
  213. return a.Size() == 0
  214. }
  215. // PickAddress implements AddrBook. It picks an address to connect to.
  216. // The address is picked randomly from an old or new bucket according
  217. // to the biasTowardsNewAddrs argument, which must be between [0, 100] (or else is truncated to that range)
  218. // and determines how biased we are to pick an address from a new bucket.
  219. // PickAddress returns nil if the AddrBook is empty or if we try to pick
  220. // from an empty bucket.
  221. // nolint:gosec // G404: Use of weak random number generator
  222. func (a *addrBook) PickAddress(biasTowardsNewAddrs int) *p2p.NetAddress {
  223. a.mtx.Lock()
  224. defer a.mtx.Unlock()
  225. bookSize := a.size()
  226. if bookSize <= 0 {
  227. if bookSize < 0 {
  228. panic(fmt.Sprintf("Addrbook size %d (new: %d + old: %d) is less than 0", a.nNew+a.nOld, a.nNew, a.nOld))
  229. }
  230. return nil
  231. }
  232. if biasTowardsNewAddrs > 100 {
  233. biasTowardsNewAddrs = 100
  234. }
  235. if biasTowardsNewAddrs < 0 {
  236. biasTowardsNewAddrs = 0
  237. }
  238. // Bias between new and old addresses.
  239. oldCorrelation := math.Sqrt(float64(a.nOld)) * (100.0 - float64(biasTowardsNewAddrs))
  240. newCorrelation := math.Sqrt(float64(a.nNew)) * float64(biasTowardsNewAddrs)
  241. // pick a random peer from a random bucket
  242. var bucket map[string]*knownAddress
  243. pickFromOldBucket := (newCorrelation+oldCorrelation)*mrand.Float64() < oldCorrelation
  244. if (pickFromOldBucket && a.nOld == 0) ||
  245. (!pickFromOldBucket && a.nNew == 0) {
  246. return nil
  247. }
  248. // loop until we pick a random non-empty bucket
  249. for len(bucket) == 0 {
  250. if pickFromOldBucket {
  251. bucket = a.bucketsOld[mrand.Intn(len(a.bucketsOld))]
  252. } else {
  253. bucket = a.bucketsNew[mrand.Intn(len(a.bucketsNew))]
  254. }
  255. }
  256. // pick a random index and loop over the map to return that index
  257. randIndex := mrand.Intn(len(bucket))
  258. for _, ka := range bucket {
  259. if randIndex == 0 {
  260. return ka.Addr
  261. }
  262. randIndex--
  263. }
  264. return nil
  265. }
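The square-root weighting above means the old/new split depends on both the bias argument and the relative sizes of the two pools. A standalone sketch of the resulting probability of picking from an old bucket (the function name and the counts passed in are hypothetical, for illustration only):

```go
package main

import (
	"fmt"
	"math"
)

// oldBucketProbability mirrors the weighting in PickAddress: an old bucket
// wins with probability oldCorrelation / (oldCorrelation + newCorrelation).
func oldBucketProbability(nOld, nNew, biasTowardsNewAddrs int) float64 {
	oldCorrelation := math.Sqrt(float64(nOld)) * (100.0 - float64(biasTowardsNewAddrs))
	newCorrelation := math.Sqrt(float64(nNew)) * float64(biasTowardsNewAddrs)
	if oldCorrelation+newCorrelation == 0 {
		return 0
	}
	return oldCorrelation / (oldCorrelation + newCorrelation)
}

func main() {
	// With equal pool sizes and a 50% bias, old and new are equally likely.
	fmt.Printf("%.2f\n", oldBucketProbability(100, 100, 50))
}
```

Because of the square roots, a small old pool still gets a meaningful share of picks even under a strong bias towards new addresses.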
// MarkGood implements AddrBook - it marks the peer as good and
// moves it into an "old" bucket.
func (a *addrBook) MarkGood(id p2p.NodeID) {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	ka := a.addrLookup[id]
	if ka == nil {
		return
	}
	ka.markGood()
	if ka.isNew() {
		if err := a.moveToOld(ka); err != nil {
			a.Logger.Error("Error moving address to old", "err", err)
		}
	}
}

// MarkAttempt implements AddrBook - it marks that an attempt was made to connect to the address.
func (a *addrBook) MarkAttempt(addr *p2p.NetAddress) {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	ka := a.addrLookup[addr.ID]
	if ka == nil {
		return
	}
	ka.markAttempt()
}

// MarkBad implements AddrBook. Kicks address out from book, places
// the address in the badPeers pool.
func (a *addrBook) MarkBad(addr *p2p.NetAddress, banTime time.Duration) {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	if a.addBadPeer(addr, banTime) {
		a.removeAddress(addr)
	}
}

// ReinstateBadPeers removes bad peers from ban list and places them into a new
// bucket.
func (a *addrBook) ReinstateBadPeers() {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	for _, ka := range a.badPeers {
		if ka.isBanned() {
			continue
		}

		bucket, err := a.calcNewBucket(ka.Addr, ka.Src)
		if err != nil {
			a.Logger.Error("Failed to calculate new bucket (bad peer won't be reinstated)",
				"addr", ka.Addr, "err", err)
			continue
		}

		if err := a.addToNewBucket(ka, bucket); err != nil {
			a.Logger.Error("Error adding peer to new bucket", "err", err)
		}
		delete(a.badPeers, ka.ID())

		a.Logger.Info("Reinstated address", "addr", ka.Addr)
	}
}

// GetSelection implements AddrBook.
// It randomly selects some addresses (old & new). Suitable for peer-exchange protocols.
// Must never return a nil address.
func (a *addrBook) GetSelection() []*p2p.NetAddress {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	bookSize := a.size()
	if bookSize <= 0 {
		if bookSize < 0 {
			panic(fmt.Sprintf("Addrbook size %d (new: %d + old: %d) is less than 0", a.nNew+a.nOld, a.nNew, a.nOld))
		}
		return nil
	}

	numAddresses := tmmath.MaxInt(
		tmmath.MinInt(minGetSelection, bookSize),
		bookSize*getSelectionPercent/100)
	numAddresses = tmmath.MinInt(maxGetSelection, numAddresses)

	// XXX: instead of making a list of all addresses, shuffling, and slicing a random chunk,
	// could we just select a random numAddresses of indexes?
	allAddr := make([]*p2p.NetAddress, bookSize)
	i := 0
	for _, ka := range a.addrLookup {
		allAddr[i] = ka.Addr
		i++
	}

	// Fisher-Yates shuffle the array. We only need to do the first
	// `numAddresses' since we are throwing away the rest.
	for i := 0; i < numAddresses; i++ {
		// pick a number between current index and the end
		// nolint:gosec // G404: Use of weak random number generator
		j := mrand.Intn(len(allAddr)-i) + i
		allAddr[i], allAddr[j] = allAddr[j], allAddr[i]
	}

	// slice off the limit we are willing to share.
	return allAddr[:numAddresses]
}
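The partial Fisher-Yates above only randomizes the first numAddresses slots, which is all that is needed before slicing. A minimal standalone sketch of the same idea, using strings in place of *p2p.NetAddress (partialShuffle is a hypothetical name):

```go
package main

import (
	"fmt"
	"math/rand"
)

// partialShuffle randomizes only the first n positions: each of the first n
// slots is swapped with a uniformly random element from the remaining tail,
// so addrs[:n] ends up being a uniform random sample of the whole slice.
func partialShuffle(addrs []string, n int) []string {
	for i := 0; i < n; i++ {
		// pick a number between the current index and the end
		j := rand.Intn(len(addrs)-i) + i
		addrs[i], addrs[j] = addrs[j], addrs[i]
	}
	return addrs[:n]
}

func main() {
	addrs := []string{"a", "b", "c", "d", "e", "f"}
	// Three distinct addresses, in random order.
	fmt.Println(partialShuffle(addrs, 3))
}
```

This avoids shuffling the full slice when only a prefix is returned, at the cost of leaving the tail only partially disturbed.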
func percentageOfNum(p, n int) int {
	return int(math.Round((float64(p) / float64(100)) * float64(n)))
}

// GetSelectionWithBias implements AddrBook.
// It randomly selects some addresses (old & new). Suitable for peer-exchange protocols.
// Must never return a nil address.
//
// Each address is picked randomly from an old or new bucket according to the
// biasTowardsNewAddrs argument, which must be between [0, 100] (or else is truncated to
// that range) and determines how biased we are to pick an address from a new
// bucket.
func (a *addrBook) GetSelectionWithBias(biasTowardsNewAddrs int) []*p2p.NetAddress {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	bookSize := a.size()
	if bookSize <= 0 {
		if bookSize < 0 {
			panic(fmt.Sprintf("Addrbook size %d (new: %d + old: %d) is less than 0", a.nNew+a.nOld, a.nNew, a.nOld))
		}
		return nil
	}

	if biasTowardsNewAddrs > 100 {
		biasTowardsNewAddrs = 100
	}
	if biasTowardsNewAddrs < 0 {
		biasTowardsNewAddrs = 0
	}

	numAddresses := tmmath.MaxInt(
		tmmath.MinInt(minGetSelection, bookSize),
		bookSize*getSelectionPercent/100)
	numAddresses = tmmath.MinInt(maxGetSelection, numAddresses)

	// number of new addresses that, if possible, should be at the beginning of the selection;
	// if there are not enough old addresses, new ones will be chosen instead
	numRequiredNewAdd := tmmath.MaxInt(percentageOfNum(biasTowardsNewAddrs, numAddresses), numAddresses-a.nOld)
	selection := a.randomPickAddresses(bucketTypeNew, numRequiredNewAdd)
	selection = append(selection, a.randomPickAddresses(bucketTypeOld, numAddresses-len(selection))...)
	return selection
}
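The interplay between the bias percentage and the old-address shortfall can be shown in isolation. A sketch with local stand-ins for the tmmath helpers (requiredNew and maxInt are hypothetical names introduced here):

```go
package main

import (
	"fmt"
	"math"
)

// percentageOfNum mirrors the helper above.
func percentageOfNum(p, n int) int {
	return int(math.Round((float64(p) / float64(100)) * float64(n)))
}

// maxInt stands in for tmmath.MaxInt.
func maxInt(a, b int) int {
	if a > b {
		return a
	}
	return b
}

// requiredNew computes numRequiredNewAdd as in GetSelectionWithBias: the bias
// percentage of the selection, bumped up when the book has too few old
// addresses to fill the remainder.
func requiredNew(bias, numAddresses, nOld int) int {
	return maxInt(percentageOfNum(bias, numAddresses), numAddresses-nOld)
}

func main() {
	// A 30% bias over 10 addresses asks for 3 new ones...
	fmt.Println(requiredNew(30, 10, 100)) // 3
	// ...but with only 2 old addresses on hand, 8 must come from new buckets.
	fmt.Println(requiredNew(30, 10, 2)) // 8
}
```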
//------------------------------------------------

// Size returns the number of addresses in the book.
func (a *addrBook) Size() int {
	a.mtx.Lock()
	defer a.mtx.Unlock()

	return a.size()
}

func (a *addrBook) size() int {
	return a.nNew + a.nOld
}

//----------------------------------------------------------

// Save persists the address book to disk.
func (a *addrBook) Save() {
	a.saveToFile(a.filePath) // thread safe
}

func (a *addrBook) saveRoutine() {
	defer a.wg.Done()

	saveFileTicker := time.NewTicker(dumpAddressInterval)
out:
	for {
		select {
		case <-saveFileTicker.C:
			a.saveToFile(a.filePath)
		case <-a.Quit():
			break out
		}
	}
	saveFileTicker.Stop()
	a.saveToFile(a.filePath)
}

//----------------------------------------------------------

func (a *addrBook) getBucket(bucketType byte, bucketIdx int) map[string]*knownAddress {
	switch bucketType {
	case bucketTypeNew:
		return a.bucketsNew[bucketIdx]
	case bucketTypeOld:
		return a.bucketsOld[bucketIdx]
	default:
		panic("Invalid bucket type")
	}
}

// Adds ka to a new bucket. If the bucket is full, expires entries to make room.
// Returns an error if the address already belongs to an old bucket.
func (a *addrBook) addToNewBucket(ka *knownAddress, bucketIdx int) error {
	// Consistency check to ensure we don't add an already known address
	if ka.isOld() {
		return errAddrBookOldAddressNewBucket{ka.Addr, bucketIdx}
	}

	addrStr := ka.Addr.String()
	bucket := a.getBucket(bucketTypeNew, bucketIdx)

	// Already exists?
	if _, ok := bucket[addrStr]; ok {
		return nil
	}

	// Enforce max addresses.
	if len(bucket) > newBucketSize {
		a.Logger.Info("new bucket is full, expiring new")
		a.expireNew(bucketIdx)
	}

	// Add to bucket.
	bucket[addrStr] = ka
	// increment nNew if the peer doesn't already exist in a bucket
	if ka.addBucketRef(bucketIdx) == 1 {
		a.nNew++
	}

	// Add it to addrLookup
	a.addrLookup[ka.ID()] = ka
	return nil
}

// Adds ka to an old bucket. Returns false if it couldn't (e.g. because the bucket is full).
func (a *addrBook) addToOldBucket(ka *knownAddress, bucketIdx int) bool {
	// Sanity check
	if ka.isNew() {
		a.Logger.Error(fmt.Sprintf("Cannot add new address to old bucket: %v", ka))
		return false
	}
	if len(ka.Buckets) != 0 {
		a.Logger.Error(fmt.Sprintf("Cannot add already old address to another old bucket: %v", ka))
		return false
	}

	addrStr := ka.Addr.String()
	bucket := a.getBucket(bucketTypeOld, bucketIdx)

	// Already exists?
	if _, ok := bucket[addrStr]; ok {
		return true
	}

	// Enforce max addresses.
	if len(bucket) > oldBucketSize {
		return false
	}

	// Add to bucket.
	bucket[addrStr] = ka
	if ka.addBucketRef(bucketIdx) == 1 {
		a.nOld++
	}

	// Ensure in addrLookup
	a.addrLookup[ka.ID()] = ka

	return true
}
func (a *addrBook) removeFromBucket(ka *knownAddress, bucketType byte, bucketIdx int) {
	if ka.BucketType != bucketType {
		a.Logger.Error(fmt.Sprintf("Bucket type mismatch: %v", ka))
		return
	}

	bucket := a.getBucket(bucketType, bucketIdx)
	delete(bucket, ka.Addr.String())

	if ka.removeBucketRef(bucketIdx) == 0 {
		if bucketType == bucketTypeNew {
			a.nNew--
		} else {
			a.nOld--
		}
		delete(a.addrLookup, ka.ID())
	}
}

func (a *addrBook) removeFromAllBuckets(ka *knownAddress) {
	for _, bucketIdx := range ka.Buckets {
		bucket := a.getBucket(ka.BucketType, bucketIdx)
		delete(bucket, ka.Addr.String())
	}
	ka.Buckets = nil
	if ka.BucketType == bucketTypeNew {
		a.nNew--
	} else {
		a.nOld--
	}
	delete(a.addrLookup, ka.ID())
}

//----------------------------------------------------------

func (a *addrBook) pickOldest(bucketType byte, bucketIdx int) *knownAddress {
	bucket := a.getBucket(bucketType, bucketIdx)
	var oldest *knownAddress
	for _, ka := range bucket {
		if oldest == nil || ka.LastAttempt.Before(oldest.LastAttempt) {
			oldest = ka
		}
	}
	return oldest
}

// Adds the address to a "new" bucket. If it's already in one,
// it only adds it probabilistically.
func (a *addrBook) addAddress(addr, src *p2p.NetAddress) error {
	if addr == nil || src == nil {
		return ErrAddrBookNilAddr{addr, src}
	}

	if err := addr.Valid(); err != nil {
		return ErrAddrBookInvalidAddr{Addr: addr, AddrErr: err}
	}

	if _, ok := a.badPeers[addr.ID]; ok {
		return ErrAddressBanned{addr}
	}

	if _, ok := a.privateIDs[addr.ID]; ok {
		return ErrAddrBookPrivate{addr}
	}

	if _, ok := a.privateIDs[src.ID]; ok {
		return ErrAddrBookPrivateSrc{src}
	}

	// TODO: we should track ourAddrs by ID and by IP:PORT and refuse both.
	if _, ok := a.ourAddrs[addr.String()]; ok {
		return ErrAddrBookSelf{addr}
	}

	if a.routabilityStrict && !addr.Routable() {
		return ErrAddrBookNonRoutable{addr}
	}

	ka := a.addrLookup[addr.ID]
	if ka != nil {
		// If it's already old and the address IDs are the same, ignore it.
		// Thereby avoiding issues with a node on the network attempting to change
		// the IP of a known node ID. (Which could yield an eclipse attack on the node)
		if ka.isOld() && ka.Addr.ID == addr.ID {
			return nil
		}
		// Already in max new buckets.
		if len(ka.Buckets) == maxNewBucketsPerAddress {
			return nil
		}
		// The more entries we have, the less likely we are to add more.
		factor := int32(2 * len(ka.Buckets))
		// nolint:gosec // G404: Use of weak random number generator
		if mrand.Int31n(factor) != 0 {
			return nil
		}
	} else {
		ka = newKnownAddress(addr, src)
	}

	bucket, err := a.calcNewBucket(addr, src)
	if err != nil {
		return err
	}
	return a.addToNewBucket(ka, bucket)
}
func (a *addrBook) randomPickAddresses(bucketType byte, num int) []*p2p.NetAddress {
	var buckets []map[string]*knownAddress
	switch bucketType {
	case bucketTypeNew:
		buckets = a.bucketsNew
	case bucketTypeOld:
		buckets = a.bucketsOld
	default:
		panic("unexpected bucketType")
	}
	total := 0
	for _, bucket := range buckets {
		total += len(bucket)
	}
	addresses := make([]*knownAddress, 0, total)
	for _, bucket := range buckets {
		for _, ka := range bucket {
			addresses = append(addresses, ka)
		}
	}
	selection := make([]*p2p.NetAddress, 0, num)
	chosenSet := make(map[string]bool, num)
	rand := tmrand.NewRand()
	rand.Shuffle(total, func(i, j int) {
		addresses[i], addresses[j] = addresses[j], addresses[i]
	})
	for _, addr := range addresses {
		if chosenSet[addr.Addr.String()] {
			continue
		}
		chosenSet[addr.Addr.String()] = true
		selection = append(selection, addr.Addr)
		if len(selection) >= num {
			return selection
		}
	}
	return selection
}

// Make space in the new buckets by expiring the really bad entries.
// If no bad entries are available we remove the oldest.
func (a *addrBook) expireNew(bucketIdx int) {
	for addrStr, ka := range a.bucketsNew[bucketIdx] {
		// If an entry is bad, throw it away
		if ka.isBad() {
			a.Logger.Info(fmt.Sprintf("expiring bad address %v", addrStr))
			a.removeFromBucket(ka, bucketTypeNew, bucketIdx)
			return
		}
	}

	// If we haven't thrown out a bad entry, throw out the oldest entry
	oldest := a.pickOldest(bucketTypeNew, bucketIdx)
	a.removeFromBucket(oldest, bucketTypeNew, bucketIdx)
}

// Promotes an address from new to old. If the destination bucket is full,
// demotes the oldest one to a "new" bucket.
// TODO: Demote more probabilistically?
func (a *addrBook) moveToOld(ka *knownAddress) error {
	// Sanity check
	if ka.isOld() {
		a.Logger.Error(fmt.Sprintf("Cannot promote address that is already old %v", ka))
		return nil
	}
	if len(ka.Buckets) == 0 {
		a.Logger.Error(fmt.Sprintf("Cannot promote address that isn't in any new buckets %v", ka))
		return nil
	}

	// Remove from all (new) buckets.
	a.removeFromAllBuckets(ka)
	// It's officially old now.
	ka.BucketType = bucketTypeOld

	// Try to add it to its oldBucket destination.
	oldBucketIdx, err := a.calcOldBucket(ka.Addr)
	if err != nil {
		return err
	}
	added := a.addToOldBucket(ka, oldBucketIdx)
	if !added {
		// No room; move the oldest to a new bucket
		oldest := a.pickOldest(bucketTypeOld, oldBucketIdx)
		a.removeFromBucket(oldest, bucketTypeOld, oldBucketIdx)
		newBucketIdx, err := a.calcNewBucket(oldest.Addr, oldest.Src)
		if err != nil {
			return err
		}
		if err := a.addToNewBucket(oldest, newBucketIdx); err != nil {
			a.Logger.Error("Error adding peer to new bucket", "err", err)
		}

		// Finally, add our ka to old bucket again.
		added = a.addToOldBucket(ka, oldBucketIdx)
		if !added {
			a.Logger.Error(fmt.Sprintf("Could not re-add ka %v to oldBucketIdx %v", ka, oldBucketIdx))
		}
	}
	return nil
}
func (a *addrBook) removeAddress(addr *p2p.NetAddress) {
	ka := a.addrLookup[addr.ID]
	if ka == nil {
		return
	}
	a.Logger.Info("Remove address from book", "addr", addr)
	a.removeFromAllBuckets(ka)
}

func (a *addrBook) addBadPeer(addr *p2p.NetAddress, banTime time.Duration) bool {
	// check that the address actually exists in the addrbook;
	// if not, there is nothing to ban
	ka := a.addrLookup[addr.ID]
	if ka == nil {
		return false
	}

	if _, alreadyBadPeer := a.badPeers[addr.ID]; !alreadyBadPeer {
		// add to bad peer list
		ka.ban(banTime)
		a.badPeers[addr.ID] = ka
		a.Logger.Info("Add address to blacklist", "addr", addr)
	}

	return true
}

//---------------------------------------------------------------------
// calculate bucket placements

// hash(key + sourcegroup + int64(hash(key + group + sourcegroup)) % buckets_per_group) % num_new_buckets
func (a *addrBook) calcNewBucket(addr, src *p2p.NetAddress) (int, error) {
	data1 := []byte{}
	data1 = append(data1, []byte(a.key)...)
	data1 = append(data1, []byte(a.groupKey(addr))...)
	data1 = append(data1, []byte(a.groupKey(src))...)
	hash1, err := a.hash(data1)
	if err != nil {
		return 0, err
	}
	hash64 := binary.BigEndian.Uint64(hash1)
	hash64 %= newBucketsPerGroup
	var hashbuf [8]byte
	binary.BigEndian.PutUint64(hashbuf[:], hash64)
	data2 := []byte{}
	data2 = append(data2, []byte(a.key)...)
	data2 = append(data2, a.groupKey(src)...)
	data2 = append(data2, hashbuf[:]...)

	hash2, err := a.hash(data2)
	if err != nil {
		return 0, err
	}
	result := int(binary.BigEndian.Uint64(hash2) % newBucketCount)
	return result, nil
}
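The two-level hashing can be sketched standalone. This illustration uses SHA-256 and made-up bucket constants purely for demonstration; the real hasher and constants are configured elsewhere in the package:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Illustrative constants only; not the package's real values.
const (
	newBucketsPerGroup = 32
	newBucketCount     = 256
)

// calcNewBucketSketch mirrors the scheme in the comment above:
// hash(key + sourcegroup + int64(hash(key + group + sourcegroup)) % buckets_per_group) % num_new_buckets.
// The inner hash pins an address's group to a small set of slots per source
// group; the outer hash spreads those slots over all new buckets.
func calcNewBucketSketch(key, group, srcGroup string) int {
	h1 := sha256.Sum256([]byte(key + group + srcGroup))
	inner := binary.BigEndian.Uint64(h1[:8]) % newBucketsPerGroup

	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], inner)
	h2 := sha256.Sum256(append([]byte(key+srcGroup), buf[:]...))
	return int(binary.BigEndian.Uint64(h2[:8]) % newBucketCount)
}

func main() {
	// Deterministic per (key, group, source group): addresses from the same
	// source group land in a bounded set of buckets.
	fmt.Println(calcNewBucketSketch("secret-key", "1.2.0.0", "8.8.0.0"))
}
```

Keying the placement on the secret book key prevents an attacker from predicting which buckets their addresses will occupy.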
// hash(key + group + int64(hash(key + addr)) % buckets_per_group) % num_old_buckets
func (a *addrBook) calcOldBucket(addr *p2p.NetAddress) (int, error) {
	data1 := []byte{}
	data1 = append(data1, []byte(a.key)...)
	data1 = append(data1, []byte(addr.String())...)
	hash1, err := a.hash(data1)
	if err != nil {
		return 0, err
	}
	hash64 := binary.BigEndian.Uint64(hash1)
	hash64 %= oldBucketsPerGroup
	var hashbuf [8]byte
	binary.BigEndian.PutUint64(hashbuf[:], hash64)
	data2 := []byte{}
	data2 = append(data2, []byte(a.key)...)
	data2 = append(data2, a.groupKey(addr)...)
	data2 = append(data2, hashbuf[:]...)

	hash2, err := a.hash(data2)
	if err != nil {
		return 0, err
	}
	result := int(binary.BigEndian.Uint64(hash2) % oldBucketCount)
	return result, nil
}

// Return a string representing the network group of this address.
// This is the /16 for IPv4 (e.g. 1.2.0.0), the /32 (/36 for he.net) for IPv6, the string
// "local" for a local address and the string "unroutable" for an unroutable
// address.
func (a *addrBook) groupKey(na *p2p.NetAddress) string {
	return groupKeyFor(na, a.routabilityStrict)
}

func groupKeyFor(na *p2p.NetAddress, routabilityStrict bool) string {
	if routabilityStrict && na.Local() {
		return "local"
	}
	if routabilityStrict && !na.Routable() {
		return "unroutable"
	}

	if ipv4 := na.IP.To4(); ipv4 != nil {
		return na.IP.Mask(net.CIDRMask(16, 32)).String()
	}

	if na.RFC6145() || na.RFC6052() {
		// last four bytes are the ip address
		ip := na.IP[12:16]
		return ip.Mask(net.CIDRMask(16, 32)).String()
	}

	if na.RFC3964() {
		ip := na.IP[2:6]
		return ip.Mask(net.CIDRMask(16, 32)).String()
	}

	if na.RFC4380() {
		// teredo tunnels have the last 4 bytes as the v4 address XOR
		// 0xff.
		ip := net.IP(make([]byte, 4))
		for i, b := range na.IP[12:16] {
			ip[i] = b ^ 0xff
		}
		return ip.Mask(net.CIDRMask(16, 32)).String()
	}

	if na.OnionCatTor() {
		// group is keyed off the first 4 bits of the actual onion key.
		return fmt.Sprintf("tor:%d", na.IP[6]&((1<<4)-1))
	}

	// OK, so now we know ourselves to be an IPv6 address.
	// bitcoind uses /32 for everything, except for Hurricane Electric's
	// (he.net) IP range, which it uses /36 for.
	bits := 32
	heNet := &net.IPNet{IP: net.ParseIP("2001:470::"), Mask: net.CIDRMask(32, 128)}
	if heNet.Contains(na.IP) {
		bits = 36
	}
	ipv6Mask := net.CIDRMask(bits, 128)
	return na.IP.Mask(ipv6Mask).String()
}
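A minimal illustration of the IPv4 /16 branch above, showing that two peers in the same /16 share a group key and are therefore treated as coming from the same network group during bucket placement (groupKey16 is a hypothetical helper, not part of the package):

```go
package main

import (
	"fmt"
	"net"
)

// groupKey16 masks an IPv4 address down to its /16 network, as the IPv4
// branch of groupKeyFor does.
func groupKey16(ipStr string) string {
	ip := net.ParseIP(ipStr)
	return ip.Mask(net.CIDRMask(16, 32)).String()
}

func main() {
	// Both addresses fall in 1.2.0.0/16, so they share a group key.
	fmt.Println(groupKey16("1.2.3.4"), groupKey16("1.2.200.7")) // 1.2.0.0 1.2.0.0
}
```

Grouping by network rather than by individual IP makes it harder for one operator's address range to dominate the buckets.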
func (a *addrBook) hash(b []byte) ([]byte, error) {
	a.hasher.Reset()
	a.hasher.Write(b)
	return a.hasher.Sum(nil), nil
}