---
order: false
---

November 2019
Over the next few weeks, @brapse, @marbar3778 and I (@tessr) are having a series of meetings to go over the architecture of Tendermint Core. These are my notes from these meetings, which will either serve as an artifact for onboarding future engineers or provide the basis for such a document.
There are three forms of communication (e.g., requests, responses, connections) that can happen in Tendermint Core: internode communication, intranode communication, and client communication.

- **Internode communication**: happens between a node and other peers. This kind of communication happens over TCP or HTTP. More on this below.
- **Intranode communication**: happens within the node itself (i.e., between reactors or other components). These are typically function or method calls, or occasionally happen through an event bus.
- **Client communication**: happens between a client (like a wallet or a browser) and a node on the network.
Internode communication can happen in two ways:

1. TCP connections through the p2p package. This is the most common case.
2. RPC over HTTP, reserved for short-lived, one-off requests.
When writing a p2p service, there are two primary responsibilities:

1. Routing: who gets which messages?
2. Peer management: who can you talk to, what is their state, and how can you do peer discovery?
The first responsibility is handled by the Switch. Every reactor holds a pointer to the switch, which is set through `setSwitch`.
TODO: More information (maybe) on the implementation of the Switch.
The second responsibility is handled by a combination of the PEX and the Address Book.
TODO: What is the PEX and the Address Book?
#### mconnection
Here are two relevant facts about TCP: setting up a new TCP connection is relatively expensive, and each connection has a window size that grows as packets are successfully acknowledged and shrinks when packets are dropped.

In order to have performant TCP connections under the conditions created in Tendermint, we've created the `mconnection`, or the multiplexing connection. It is our own protocol built on top of TCP. It lets us reuse TCP connections to minimize overhead, and it keeps the window size high by sending auxiliary messages when necessary.
The `mconnection` is represented by a struct, which contains a batch of messages, read and write buffers, and a map of channel IDs to reactors. It communicates with TCP via file descriptors, which it can write to. There is one `mconnection` per peer connection.
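To make that concrete, here's a minimal sketch of such a struct. The names are illustrative only; the real definition lives in the p2p package and differs in detail.

```go
package p2p

import (
	"bufio"
	"net"
)

// Reactor stands in for the real p2p.Reactor in this sketch.
type Reactor interface{}

// MConnection here is a sketch of the fields described above, not the
// actual definition.
type MConnection struct {
	conn         net.Conn         // the TCP connection (a wrapped file descriptor)
	bufReader    *bufio.Reader    // read buffer
	bufWriter    *bufio.Writer    // write buffer
	sendQueue    chan []byte      // batch of outbound messages awaiting a write
	reactorsByCh map[byte]Reactor // channel ID -> reactor, for dispatching reads
}
```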
The `mconnection` has two methods: `send`, which takes a raw handle to the socket and writes to it; and `trySend`, which writes to a different buffer. (TODO: which buffer?)
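Continuing the sketch above, one plausible reading of that description is that `send` writes through the write buffer to the socket, while `trySend` only enqueues on the outbound batch and refuses to block. This is an assumption for illustration, not the actual implementation.

```go
// send pushes the message through the write buffer down to the socket.
func (c *MConnection) send(msg []byte) error {
	if _, err := c.bufWriter.Write(msg); err != nil {
		return err
	}
	return c.bufWriter.Flush()
}

// trySend enqueues the message on the outbound batch, but only if there
// is room right now; it never blocks the caller.
func (c *MConnection) trySend(msg []byte) bool {
	select {
	case c.sendQueue <- msg:
		return true
	default:
		return false // queue full; the caller may retry later
	}
}
```

Under this reading, `trySend` trades a delivery guarantee for never blocking the caller.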
The `mconnection` is owned by a peer, which is owned (potentially with many other peers) by a (global) transport, which is owned by the (global) switch:
```
switch
  transport
    peer
      mconnection
    peer
      mconnection
    peer
      mconnection
```
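The same ownership chain, expressed as Go types (still the illustrative sketch, not the real definitions):

```go
// One global switch owns one global transport.
type Switch struct {
	transport *Transport
}

// The transport owns many peers.
type Transport struct {
	peers []*Peer
}

// Each peer owns exactly one mconnection.
type Peer struct {
	mconn *MConnection
}
```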
`node.go` is the entrypoint for running a node. It sets up reactors, sets up the switch, and registers all the RPC endpoints for a node.
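A hypothetical, heavily simplified outline of that startup flow; every name below is made up for illustration and is not the actual `node.go` API:

```go
package node

// Reactor stands in for p2p.Reactor in this sketch.
type Reactor interface{}

type Switch struct{ reactors map[string]Reactor }

func newSwitch() *Switch { return &Switch{reactors: map[string]Reactor{}} }

func (sw *Switch) addReactor(name string, r Reactor) { sw.reactors[name] = r }

type Node struct{ sw *Switch }

// newNode mirrors the setup order described above: create the switch,
// register the reactors with it, then wire up RPC.
func newNode(mempool, blockchain, consensus Reactor) *Node {
	sw := newSwitch()

	// Set up the reactors and register each one with the switch.
	sw.addReactor("MEMPOOL", mempool)
	sw.addReactor("BLOCKCHAIN", blockchain)
	sw.addReactor("CONSENSUS", consensus)

	// Register the node's RPC endpoints here (elided in this sketch).

	return &Node{sw: sw}
}
```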
TODO: Flesh out the differences between the types of nodes and how they're configured.
Here are some Reactor Facts:

- Every reactor holds a pointer to the global switch, set through `SetSwitch()`
- The switch holds a pointer to every reactor, via `addReactor()`
- `addReactor` is called by the switch; `addReactor` calls `setSwitch` for that reactor

Furthermore, all reactors expose:

- a `receive` method
- an `addReactor` call

The `receive` method can be called many times by the `mconnection`. It has the same signature across all reactors.
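From memory, that shared signature looks roughly like this (indicative, not authoritative):

```go
package p2p

// Peer stands in for the real p2p.Peer in this sketch.
type Peer interface{}

type Reactor interface {
	// Receive is called by the mconnection whenever a message arrives
	// on one of the reactor's channels.
	Receive(chID byte, peer Peer, msgBytes []byte)
	// ... plus lifecycle and channel-registration methods, elided here
}
```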
The `addReactor` call does a for loop over all the channels on the reactor and creates a map of channel IDs -> reactors. The switch holds onto this map, and passes it to the transport, a thin wrapper around TCP connections.
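Here's a standalone sketch of that loop, with illustrative types (the real logic lives in the p2p package):

```go
package p2p

type ChannelDescriptor struct{ ID byte }

type Reactor interface {
	GetChannels() []*ChannelDescriptor
	SetSwitch(*Switch)
}

type Switch struct {
	reactorsByCh map[byte]Reactor // channel ID -> reactor, used for routing
}

// addReactor registers every channel the reactor speaks on, then hands
// the reactor a pointer back to the switch.
func (sw *Switch) addReactor(reactor Reactor) {
	for _, ch := range reactor.GetChannels() {
		if _, ok := sw.reactorsByCh[ch.ID]; ok {
			panic("channel ID already claimed by another reactor")
		}
		sw.reactorsByCh[ch.ID] = reactor
	}
	reactor.SetSwitch(sw) // addReactor calls setSwitch for that reactor
}
```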
The following is an exhaustive (?) list of reactors:

- Blockchain Reactor
- Consensus Reactor
- Evidence Reactor
- Mempool Reactor
- PEX Reactor

Each of these will be discussed in more detail later.
The blockchain reactor has two responsibilities:

1. Serve blocks at the request of peers.
2. Sync blocks from peers when the node is catching up (fast sync).