
docs: convert from md to rst

pull/654/head
Zach Ramsay 7 years ago
parent
commit
d4ccc88676
8 changed files with 794 additions and 592 deletions
  1. +0
    -229
      docs/mintnet-kubernetes.md
  2. +253
    -0
      docs/mintnet-kubernetes.rst
  3. +0
    -66
      docs/terraform-digitalocean.md
  4. +119
    -0
      docs/terraform-digitalocean.rst
  5. +0
    -202
      docs/using-ansible.md
  6. +294
    -0
      docs/using-ansible.rst
  7. +0
    -95
      docs/using-docker.md
  8. +128
    -0
      docs/using-docker.rst

+ 0
- 229
docs/mintnet-kubernetes.md

@@ -1,229 +0,0 @@
# Tendermint network powered by Kubernetes
![Tendermint plus Kubernetes](img/t_plus_k.png)
* [QuickStart (MacOS)](#quickstart-macos)
* [QuickStart (Linux)](#quickstart-linux)
* [Usage](#usage)
* [Security](#security)
* [Fault tolerance](#fault-tolerance)
* [Starting process](#starting-process)
This should primarily be used for testing purposes or for tightly-defined
chains operated by a single stakeholder (see [the security
precautions](#security)). If your desire is to launch an application with many
stakeholders, consider using our set of Ansible scripts.
## QuickStart (MacOS)
[Requirements](https://github.com/kubernetes/minikube#requirements)
```
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
minikube start
git clone https://github.com/tendermint/tools.git && cd tools/mintnet-kubernetes/examples/basecoin && make create
```
## QuickStart (Linux)
[Requirements](https://github.com/kubernetes/minikube#requirements)
```
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
minikube start
git clone https://github.com/tendermint/tools.git && cd tools/mintnet-kubernetes/examples/basecoin && make create
```
### Verify everything works
**Using a shell:**
1. wait until all the pods are `Running`.
```
kubectl get pods -w -o wide -L tm
```
2. query the Tendermint app logs from the first pod.
```
kubectl logs -c tm -f tm-0
```
3. use the [REST API](https://tendermint.com/docs/internals/rpc) to fetch the
status of the second pod's Tendermint app. Note we are using `kubectl exec`
because pods are not (and should not be) exposed to the outer network.
```
kubectl exec -c tm tm-0 -- curl -s http://tm-1.basecoin:46657/status | json_pp
```
**Using the dashboard:**
```
minikube dashboard
```
### Clean up
```
make destroy
```
## Usage
### (1/4) Setup a Kubernetes cluster
- locally using [Minikube](https://github.com/kubernetes/minikube)
- on GCE with a single click in the web UI
- on AWS using [Kubernetes Operations](https://github.com/kubernetes/kops/blob/master/docs/aws.md)
- on Linux machines (Digital Ocean) using [kubeadm](https://kubernetes.io/docs/getting-started-guides/kubeadm/)
- on AWS, Azure, GCE or bare metal using [Kargo (Ansible)](https://kubernetes.io/docs/getting-started-guides/kargo/)
Please refer to [the official
documentation](https://kubernetes.io/docs/getting-started-guides/) for an
overview and comparison of the different options. See our guides for [Google
Cloud Engine](docs/SETUP_K8S_ON_GCE.md) or [Digital Ocean](docs/SETUP_K8S_ON_DO.md).
**Make sure you have Kubernetes >= 1.5, because you will be using StatefulSets,
which are a beta feature in 1.5.**
### (2/4) Create a configuration file
Download a template:
```
curl -Lo app.yaml https://github.com/tendermint/tools/raw/master/mintnet-kubernetes/app.template.yaml
```
Open `app.yaml` in your favorite editor and configure your app container
(navigate to `- name: app`). The Kubernetes DSL (Domain Specific Language) is
quite simple, so it should be easy. You will need to set the Docker image,
command, and/or run arguments. Replace variables prefixed with `YOUR_APP` with
the corresponding values. Set the genesis time to now and your preferred
chain ID in the ConfigMap.
Please note that if you change the `replicas` number, do not forget to update
the `validators` set in the ConfigMap. You will be able to scale the cluster up
or down later, but new pods (nodes) won't become validators automatically.
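For illustration, a sketch of the two places that must stay in sync (a
hypothetical fragment; the field names and the `validators` value format here
are assumptions -- follow `app.template.yaml`, not this sketch):

```
apiVersion: apps/v1beta1        # StatefulSets are beta in Kubernetes 1.5
kind: StatefulSet
metadata:
  name: tm
spec:
  replicas: 4                   # number of pods (nodes)
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tm-config
data:
  validators: "tm-0,tm-1,tm-2,tm-3"   # assumed format: one entry per replica
```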
### (3/4) Deploy your application
```
kubectl create -f ./app.yaml
```
### (4/4) Observe your cluster
**web UI** <-> https://github.com/kubernetes/dashboard
The easiest way to access Dashboard is to use kubectl. Run the following command in your desktop environment:
```
kubectl proxy
```
kubectl will handle authentication with apiserver and make Dashboard available at [http://localhost:8001/ui](http://localhost:8001/ui)
**shell**
List all the pods:
```
kubectl get pods -o wide -L tm
```
StatefulSet details:
```
kubectl describe statefulsets tm
```
First pod details:
```
kubectl describe pod tm-0
```
Tendermint app logs from the first pod:
```
kubectl logs tm-0 -c tm -f
```
App logs from the first pod:
```
kubectl logs tm-0 -c app -f
```
Status of the second pod's Tendermint app:
```
kubectl exec -c tm tm-0 -- curl -s http://tm-1.<YOUR_APP_NAME>:46657/status | json_pp
```
## Security
Due to the nature of Kubernetes, where you typically have a single master, the
master can be a SPOF (Single Point Of Failure). Therefore, you need to make
sure that only authorized people can access it, and that those people follow
basic security practices themselves.
These are the best practices:
- all access to the master is over TLS
- access to the API Server is X.509 certificate or token based
- etcd is not exposed directly to the cluster
- ensure that images are free of vulnerabilities ([1](https://github.com/coreos/clair))
- ensure that only authorized images are used in your environment
- disable direct access to Kubernetes nodes (no SSH)
- define resource quota
Resources:
- https://kubernetes.io/docs/admin/accessing-the-api/
- http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html
- https://blog.openshift.com/securing-kubernetes/
## Fault tolerance
Having a single master (API server) is also risky because if something
happens to it, you risk being left without access to the application.
To avoid that you can [run Kubernetes in multiple
zones](https://kubernetes.io/docs/admin/multiple-zones/), each zone running an
[API server](https://kubernetes.io/docs/admin/high-availability/) and load
balance requests between them. Do not forget to make sure only one instance of
the scheduler and controller-manager is running at once.
Running in multiple zones is a lightweight version of the broader [Cluster
Federation feature](https://kubernetes.io/docs/admin/federation/). Federated
deployments can span multiple regions (not just zones). We haven't tried this
feature yet, so any feedback is highly appreciated, especially regarding the
additional latency and cost of exchanging data between regions.
Resources:
- https://kubernetes.io/docs/admin/high-availability/
## Starting process
![StatefulSet](img/statefulset.png)
Init containers (`tm-gen-validator`) run before all other containers, creating
a public-private key pair for each pod. Every `tm` container then asks the
other pods for their public keys, which are served with nginx (the `pub-key`
container). Once a `tm` container has all the keys, it forms a genesis file and
starts the Tendermint process.
## TODO
- [ ] run tendermint from tmuser
```
securityContext:
  fsGroup: 999
```

+ 253
- 0
docs/mintnet-kubernetes.rst

@@ -0,0 +1,253 @@
Tendermint network powered by Kubernetes
========================================
.. figure:: img/t_plus_k.png
   :alt: Tendermint plus Kubernetes

   Tendermint plus Kubernetes
- `QuickStart (MacOS) <#quickstart-macos>`__
- `QuickStart (Linux) <#quickstart-linux>`__
- `Usage <#usage>`__
- `Security <#security>`__
- `Fault tolerance <#fault-tolerance>`__
- `Starting process <#starting-process>`__
This should primarily be used for testing purposes or for
tightly-defined chains operated by a single stakeholder (see `the
security precautions <#security>`__). If your desire is to launch an
application with many stakeholders, consider using our set of Ansible
scripts.
QuickStart (MacOS)
------------------
`Requirements <https://github.com/kubernetes/minikube#requirements>`__
::

    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl
    curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
    minikube start
    git clone https://github.com/tendermint/tools.git && cd tools/mintnet-kubernetes/examples/basecoin && make create
QuickStart (Linux)
------------------
`Requirements <https://github.com/kubernetes/minikube#requirements>`__
::

    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl
    curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
    minikube start
    git clone https://github.com/tendermint/tools.git && cd tools/mintnet-kubernetes/examples/basecoin && make create
Verify everything works
~~~~~~~~~~~~~~~~~~~~~~~
**Using a shell:**
1. wait until all the pods are ``Running``.

   ``kubectl get pods -w -o wide -L tm``

2. query the Tendermint app logs from the first pod.

   ``kubectl logs -c tm -f tm-0``

3. use the `REST API <https://tendermint.com/docs/internals/rpc>`__ to fetch
   the status of the second pod's Tendermint app. Note we are using
   ``kubectl exec`` because pods are not (and should not be) exposed to the
   outer network.

   ``kubectl exec -c tm tm-0 -- curl -s http://tm-1.basecoin:46657/status | json_pp``
**Using the dashboard:**
::

    minikube dashboard
Clean up
~~~~~~~~
::

    make destroy
Usage
-----
(1/4) Setup a Kubernetes cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- locally using `Minikube <https://github.com/kubernetes/minikube>`__
- on GCE with a single click in the web UI
- on AWS using `Kubernetes
Operations <https://github.com/kubernetes/kops/blob/master/docs/aws.md>`__
- on Linux machines (Digital Ocean) using
`kubeadm <https://kubernetes.io/docs/getting-started-guides/kubeadm/>`__
- on AWS, Azure, GCE or bare metal using `Kargo
(Ansible) <https://kubernetes.io/docs/getting-started-guides/kargo/>`__
Please refer to `the official
documentation <https://kubernetes.io/docs/getting-started-guides/>`__
for an overview and comparison of the different options. See our guides
for `Google Cloud Engine <docs/SETUP_K8S_ON_GCE.md>`__ or `Digital
Ocean <docs/SETUP_K8S_ON_DO.md>`__.
**Make sure you have Kubernetes >= 1.5, because you will be using
StatefulSets, which are a beta feature in 1.5.**
(2/4) Create a configuration file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Download a template:
::

    curl -Lo app.yaml https://github.com/tendermint/tools/raw/master/mintnet-kubernetes/app.template.yaml
Open ``app.yaml`` in your favorite editor and configure your app
container (navigate to ``- name: app``). The Kubernetes DSL (Domain
Specific Language) is quite simple, so it should be easy. You will need
to set the Docker image, command, and/or run arguments. Replace
variables prefixed with ``YOUR_APP`` with the corresponding values. Set
the genesis time to now and your preferred chain ID in the ConfigMap.
Please note that if you change the ``replicas`` number, do not forget to
update the ``validators`` set in the ConfigMap. You will be able to
scale the cluster up or down later, but new pods (nodes) won't become
validators automatically.
(3/4) Deploy your application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::

    kubectl create -f ./app.yaml
(4/4) Observe your cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~
**web UI** <-> https://github.com/kubernetes/dashboard
The easiest way to access Dashboard is to use kubectl. Run the following
command in your desktop environment:
::

    kubectl proxy
kubectl will handle authentication with apiserver and make Dashboard
available at http://localhost:8001/ui
**shell**
List all the pods:

::

    kubectl get pods -o wide -L tm

StatefulSet details:

::

    kubectl describe statefulsets tm

First pod details:

::

    kubectl describe pod tm-0

Tendermint app logs from the first pod:

::

    kubectl logs tm-0 -c tm -f

App logs from the first pod:

::

    kubectl logs tm-0 -c app -f

Status of the second pod's Tendermint app:

::

    kubectl exec -c tm tm-0 -- curl -s http://tm-1.<YOUR_APP_NAME>:46657/status | json_pp
Security
--------
Due to the nature of Kubernetes, where you typically have a single
master, the master can be a SPOF (Single Point Of Failure). Therefore,
you need to make sure that only authorized people can access it, and
that those people follow basic security practices themselves.
These are the best practices:
- all access to the master is over TLS
- access to the API Server is X.509 certificate or token based
- etcd is not exposed directly to the cluster
- ensure that images are free of vulnerabilities
(`1 <https://github.com/coreos/clair>`__)
- ensure that only authorized images are used in your environment
- disable direct access to Kubernetes nodes (no SSH)
- define resource quota
Resources:
- https://kubernetes.io/docs/admin/accessing-the-api/
- http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html
- https://blog.openshift.com/securing-kubernetes/
Fault tolerance
---------------
Having a single master (API server) is also risky because if something
happens to it, you risk being left without access to the application.
To avoid that you can `run Kubernetes in multiple
zones <https://kubernetes.io/docs/admin/multiple-zones/>`__, each zone
running an `API
server <https://kubernetes.io/docs/admin/high-availability/>`__ and load
balance requests between them. Do not forget to make sure only one
instance of the scheduler and controller-manager is running at once.
Running in multiple zones is a lightweight version of the broader `Cluster
Federation feature <https://kubernetes.io/docs/admin/federation/>`__.
Federated deployments can span multiple regions (not just zones). We
haven't tried this feature yet, so any feedback is highly appreciated,
especially regarding the additional latency and cost of exchanging data
between regions.
Resources:
- https://kubernetes.io/docs/admin/high-availability/
Starting process
----------------
.. figure:: img/statefulset.png
   :alt: StatefulSet

   StatefulSet
Init containers (``tm-gen-validator``) run before all other containers,
creating a public-private key pair for each pod. Every ``tm`` container
then asks the other pods for their public keys, which are served with
nginx (the ``pub-key`` container). Once a ``tm`` container has all the
keys, it forms a genesis file and starts the Tendermint process.
TODO
----
- [ ] run tendermint from tmuser

  ::

      securityContext:
        fsGroup: 999

+ 0
- 66
docs/terraform-digitalocean.md

@@ -1,66 +0,0 @@
# Terraform for Digital Ocean
This is a generic [Terraform](https://www.terraform.io/) configuration that sets up DigitalOcean droplets.
# Prerequisites
* Install [HashiCorp Terraform](https://www.terraform.io) on a linux machine.
* Create a [DigitalOcean API token](https://cloud.digitalocean.com/settings/api/tokens) with read and write capability.
* Create a private/public key pair for SSH. This is needed to log onto your droplets as well as by Ansible to connect for configuration changes.
* Set up the public SSH key at the [DigitalOcean security page](https://cloud.digitalocean.com/settings/security). [Here](https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets)'s a tutorial.
* Find out your SSH key ID at DigitalOcean by querying the below command on your linux box:
```
DO_API_TOKEN="<The API token received from DigitalOcean>"
curl -X GET -H "Content-Type: application/json" -H "Authorization: Bearer $DO_API_TOKEN" "https://api.digitalocean.com/v2/account/keys"
```
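The response is JSON; the snippet below sketches pulling out just the numeric
key IDs with python (the `RESPONSE` value is a hypothetical, trimmed-down
example of the real payload):

```shell
# Hypothetical, trimmed-down response; the real API returns more fields.
RESPONSE='{"ssh_keys":[{"id":1234567,"name":"example-key"}]}'
# Extract the numeric IDs so they can be passed to Terraform later.
KEY_IDS=$(printf '%s' "$RESPONSE" | python3 -c 'import json,sys; print(",".join(str(k["id"]) for k in json.load(sys.stdin)["ssh_keys"]))')
echo "$KEY_IDS"
```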
# How to run
## Initialization
If this is your first time using terraform, you have to initialize it by running the below command. (Note: initialization can be run multiple times)
```
terraform init
```
After initialization it's good practice to create a new Terraform environment for the droplets so they are always managed together.
```
TESTNET_NAME="testnet-servers"
terraform env new "$TESTNET_NAME"
```
Note this `terraform env` command is only available in terraform `v0.9` and up.
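For example, you can later check which environments exist and switch back to this one (subcommands available in terraform `v0.9` and up):

```
terraform env list
terraform env select "$TESTNET_NAME"
```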
## Execution
The below command will create 4 nodes in DigitalOcean. They will be named `testnet-servers-node0` to `testnet-servers-node3` and they will be tagged as `testnet-servers`.
```
DO_API_TOKEN="<The API token received from DigitalOcean>"
SSH_IDS="[ \"<The SSH ID received from the curl call above.>\" ]"
terraform apply -var TESTNET_NAME="testnet-servers" -var servers=4 -var DO_API_TOKEN="$DO_API_TOKEN" -var ssh_keys="$SSH_IDS"
```
Note: `ssh_keys` is a list of strings. You can add multiple keys. For example: `["1234567","9876543"]`.
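The naming scheme described above can be sketched as `<TESTNET_NAME>-node<N>` for `N` from 0 to `servers - 1` (illustration only; the actual names are generated by the Terraform configuration):

```shell
# Illustrate the droplet naming scheme: <TESTNET_NAME>-node<N>.
TESTNET_NAME="testnet-servers"
SERVERS=4
NAMES=""
i=0
while [ "$i" -lt "$SERVERS" ]; do
  NAMES="$NAMES${NAMES:+ }${TESTNET_NAME}-node${i}"
  i=$((i + 1))
done
echo "$NAMES"
```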
Alternatively you can use the default settings. The default number of servers is 4 and the testnet name is `tf-testnet1`. Variables can also be defined as environment variables instead of on the command line. Environment variables that start with `TF_VAR_` will be translated into the Terraform configuration. For example, the number of servers can be overridden by setting the `TF_VAR_servers` variable.
```
TF_VAR_DO_API_TOKEN="<The API token received from DigitalOcean>"
TF_VAR_TESTNET_NAME="testnet-servers"
terraform apply
```
## Security
DigitalOcean uses the root user by default on its droplets. This is fine as long as SSH keys are used. However, some people would still like to disable root and use an alternative user to connect to the droplets - then `sudo` from there.
Terraform can do this, but it requires an SSH agent running on the machine where terraform is run, with one of the droplets' SSH keys added to the agent. (This will be needed for ansible too, so it's worth setting it up here. Check out the [ansible](https://github.com/tendermint/tools/tree/master/ansible) page for more information.)
After setting up the SSH key, run `terraform apply` with `-var noroot=true` to create your droplets. Terraform will create a user called `ec2-user` and move the SSH keys over, disabling SSH login for root. It also adds `ec2-user` to the sudoers file, so after logging in as ec2-user you can `sudo` to `root`.
DigitalOcean announced firewalls, but the current version of Terraform (0.9.8 as of this writing) does not support them yet. Fortunately it is quite easy to set them up through the web interface (and not that bad through the [RESTful API](https://developers.digitalocean.com/documentation/v2/#firewalls) either). When adding droplets to a firewall rule, you can add tags. All droplets in a testnet are tagged with the testnet name, so it's enough to define the testnet name in the firewall rule; it is not necessary to add the nodes one-by-one. Also, the firewall rule "remembers" the testnet name tag, so if you change the servers but keep the name, the firewall rules will still apply.
# What's next
After setting up the nodes, head over to the [ansible folder](https://github.com/tendermint/tools/tree/master/ansible) to set up tendermint and basecoin.

+ 119
- 0
docs/terraform-digitalocean.rst

@@ -0,0 +1,119 @@
Terraform for Digital Ocean
===========================
This is a generic `Terraform <https://www.terraform.io/>`__
configuration that sets up DigitalOcean droplets.
Prerequisites
=============
- Install `HashiCorp Terraform <https://www.terraform.io>`__ on a linux
machine.
- Create a `DigitalOcean API
token <https://cloud.digitalocean.com/settings/api/tokens>`__ with
read and write capability.
- Create a private/public key pair for SSH. This is needed to log onto
your droplets as well as by Ansible to connect for configuration
changes.
- Set up the public SSH key at the `DigitalOcean security
page <https://cloud.digitalocean.com/settings/security>`__.
`Here <https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets>`__'s
a tutorial.
- Find out your SSH key ID at DigitalOcean by querying the below
command on your linux box:
::

    DO_API_TOKEN="<The API token received from DigitalOcean>"
    curl -X GET -H "Content-Type: application/json" -H "Authorization: Bearer $DO_API_TOKEN" "https://api.digitalocean.com/v2/account/keys"
How to run
==========
Initialization
--------------
If this is your first time using terraform, you have to initialize it by
running the below command. (Note: initialization can be run multiple
times)
::

    terraform init
After initialization it's good practice to create a new Terraform
environment for the droplets so they are always managed together.
::

    TESTNET_NAME="testnet-servers"
    terraform env new "$TESTNET_NAME"
Note this ``terraform env`` command is only available in terraform
``v0.9`` and up.
Execution
---------
The below command will create 4 nodes in DigitalOcean. They will be
named ``testnet-servers-node0`` to ``testnet-servers-node3`` and they
will be tagged as ``testnet-servers``.
::

    DO_API_TOKEN="<The API token received from DigitalOcean>"
    SSH_IDS="[ \"<The SSH ID received from the curl call above.>\" ]"
    terraform apply -var TESTNET_NAME="testnet-servers" -var servers=4 -var DO_API_TOKEN="$DO_API_TOKEN" -var ssh_keys="$SSH_IDS"
Note: ``ssh_keys`` is a list of strings. You can add multiple keys. For
example: ``["1234567","9876543"]``.
Alternatively you can use the default settings. The default number of
servers is 4 and the testnet name is ``tf-testnet1``. Variables can also
be defined as environment variables instead of on the command line.
Environment variables that start with ``TF_VAR_`` will be translated
into the Terraform configuration. For example, the number of servers can
be overridden by setting the ``TF_VAR_servers`` variable.
::

    TF_VAR_DO_API_TOKEN="<The API token received from DigitalOcean>"
    TF_VAR_TESTNET_NAME="testnet-servers"
    terraform apply
Security
--------
DigitalOcean uses the root user by default on its droplets. This is fine
as long as SSH keys are used. However, some people would still like to
disable root and use an alternative user to connect to the droplets -
then ``sudo`` from there. Terraform can do this, but it requires an SSH
agent running on the machine where terraform is run, with one of the
droplets' SSH keys added to the agent. (This will be needed for ansible
too, so it's worth setting it up here. Check out the
`ansible <https://github.com/tendermint/tools/tree/master/ansible>`__
page for more information.) After setting up the SSH key, run
``terraform apply`` with ``-var noroot=true`` to create your droplets.
Terraform will create a user called ``ec2-user`` and move the SSH keys
over, disabling SSH login for root. It also adds ``ec2-user`` to the
sudoers file, so after logging in as ec2-user you can ``sudo`` to
``root``.
DigitalOcean announced firewalls, but the current version of Terraform
(0.9.8 as of this writing) does not support them yet. Fortunately it is
quite easy to set them up through the web interface (and not that bad
through the `RESTful
API <https://developers.digitalocean.com/documentation/v2/#firewalls>`__
either). When adding droplets to a firewall rule, you can add tags. All
droplets in a testnet are tagged with the testnet name, so it's enough
to define the testnet name in the firewall rule; it is not necessary to
add the nodes one-by-one. Also, the firewall rule "remembers" the
testnet name tag, so if you change the servers but keep the name, the
firewall rules will still apply.
What's next
===========
After setting up the nodes, head over to the `ansible
folder <https://github.com/tendermint/tools/tree/master/ansible>`__ to
set up tendermint and basecoin.

+ 0
- 202
docs/using-ansible.md

@@ -1,202 +0,0 @@
# Ansible playbook for Tendermint applications
![Ansible plus Tendermint](img/a_plus_t.png)
* [Prerequisites](#prerequisites)
* [Ansible setup](#ansible-setup)
* [Running the playbooks](#running-the-playbooks)
The playbooks in this folder run [ansible](http://www.ansible.com/) roles which:
* install and configure basecoin or ethermint
* start/stop basecoin or ethermint and reset their configuration
## Prerequisites
* Ansible 2.0 or higher
* SSH key to the servers
Optional for DigitalOcean droplets:
* DigitalOcean API Token
* python dopy package
Head over to the [Terraform folder](https://github.com/tendermint/tools/tree/master/terraform-digitalocean) for a description on how to get a DigitalOcean API Token.
Optional for Amazon AWS instances:
* Amazon AWS API access key ID and secret access key.
The cloud inventory scripts come from the ansible team at their [GitHub](https://github.com/ansible/ansible) page. You can get the latest version from the contrib/inventory folder.
## Ansible setup
Ansible requires a "command machine" or "local machine" or "orchestrator machine" to run on. This can be your laptop or any machine that can run ansible. (It does not have to be part of the cloud network that hosts your servers.)
Use the official [Ansible installation guide](http://docs.ansible.com/ansible/intro_installation.html) to install Ansible. Here are a few examples on basic installation commands:
Ubuntu/Debian:
```
sudo apt-get install ansible
```
CentOS/RedHat:
```
sudo yum install epel-release
sudo yum install ansible
```
Mac OSX:
If you have [Homebrew](https://brew.sh) installed, then it's simply
```
brew install ansible
```
If not, you can install it using `pip`:
```
sudo easy_install pip
sudo pip install ansible
```
To make life easier, you can start an SSH Agent and load your SSH key(s). This way ansible will have an uninterrupted way of connecting to your servers.
```
ssh-agent > ~/.ssh/ssh.env
source ~/.ssh/ssh.env
ssh-add private.key
```
Subsequently, as long as the agent is running, you can use `source ~/.ssh/ssh.env` to load the keys to the current session.
Note: On Mac OSX, you can add the `-K` option to ssh-add to store the passphrase in your keychain. The security of this feature is debated but it is convenient.
### Optional cloud dependencies
If you are using a cloud provider to host your servers, you need the below dependencies installed on your local machine.
#### DigitalOcean inventory dependencies:
Ubuntu/Debian:
```
sudo apt-get install python-pip
sudo pip install dopy
```
CentOS/RedHat:
```
sudo yum install python-pip
sudo pip install dopy
```
Mac OSX:
```
sudo pip install dopy
```
#### Amazon AWS inventory dependencies:
Ubuntu/Debian:
```
sudo apt-get install python-boto
```
CentOS/RedHat:
```
sudo yum install python-boto
```
Mac OSX:
```
sudo pip install boto
```
## Refreshing the DigitalOcean inventory
If you just finished creating droplets, the local DigitalOcean inventory cache is not up-to-date. To refresh it, run:
```
DO_API_TOKEN="<The API token received from DigitalOcean>"
python -u inventory/digital_ocean.py --refresh-cache 1> /dev/null
```
## Refreshing the Amazon AWS inventory
If you just finished creating Amazon AWS EC2 instances, the local AWS inventory cache is not up-to-date. To refresh it, run:
```
AWS_ACCESS_KEY_ID='<The API access key ID received from Amazon>'
AWS_SECRET_ACCESS_KEY='<The API secret access key received from Amazon>'
python -u inventory/ec2.py --refresh-cache 1> /dev/null
```
Note: you don't need the access key and secret key set if you are running ansible on an Amazon AMI instance with the proper IAM permissions set.
## Running the playbooks
The playbooks are locked down to only run if the environment variable `TF_VAR_TESTNET_NAME` is populated. This is a precaution so you don't accidentally run the playbook on all your servers.
The variable `TF_VAR_TESTNET_NAME` contains the testnet name which ansible translates into an ansible group. If you used Terraform to create the servers, it was the testnet name used there.
If the playbook cannot connect to the servers because of public key denial, your SSH Agent is not set up properly. Alternatively you can add the SSH key to ansible using the `--private-key` option.
If you need to connect to the nodes as root but your local username is different, use the ansible option `-u root` to tell ansible to connect to the servers and authenticate as the root user.
If you secured your server and you need to `sudo` for root access, use the `-b` or `--become` option to tell ansible to sudo to root after connecting to the server. In the Terraform-DigitalOcean example, if you created the ec2-user by adding the `noroot=true` option (or if you are simply on Amazon AWS), you need to add the options `-u ec2-user -b` to tell ansible to connect as the ec2-user and then sudo to root to run the playbook.
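Putting those options together, a full invocation against the DigitalOcean inventory might look like this (illustrative values; `--private-key` is only needed when no SSH agent is running):

```
DO_API_TOKEN="<The API token received from DigitalOcean>"
TF_VAR_TESTNET_NAME="testnet-servers"
ansible-playbook -i inventory/digital_ocean.py install.yml -u ec2-user -b --private-key ~/.ssh/private.key -e service=basecoin
```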
### DigitalOcean
```
DO_API_TOKEN="<The API token received from DigitalOcean>"
TF_VAR_TESTNET_NAME="testnet-servers"
ansible-playbook -i inventory/digital_ocean.py install.yml -e service=basecoin
```
### Amazon AWS
```
AWS_ACCESS_KEY_ID='<The API access key ID received from Amazon>'
AWS_SECRET_ACCESS_KEY='<The API secret access key received from Amazon>'
TF_VAR_TESTNET_NAME="testnet-servers"
ansible-playbook -i inventory/ec2.py install.yml -e service=basecoin
```
### Installing custom versions
By default ansible installs the tendermint, basecoin or ethermint binary versions from the latest release in the repository. If you build your own version of the binaries, you can tell ansible to install that instead.
```
GOPATH="<your go path>"
go get -u github.com/tendermint/basecoin/cmd/basecoin
DO_API_TOKEN="<The API token received from DigitalOcean>"
TF_VAR_TESTNET_NAME="testnet-servers"
ansible-playbook -i inventory/digital_ocean.py install.yml -e service=basecoin -e release_install=false
```
Alternatively you can change the variable settings in `group_vars/all`.
## Other commands and roles
There are few extra playbooks to make life easier managing your servers.
* install.yml - Install basecoin or ethermint applications. (Tendermint gets installed automatically.) Use the `service` parameter to define which application to install. Defaults to `basecoin`.
* reset.yml - Stop the application, reset the configuration and data, then start the application again. You need to pass `-e service=<servicename>`, like `-e service=basecoin`. It will restart the underlying tendermint application too.
* restart.yml - Restart a service on all nodes. You need to pass `-e service=<servicename>`, like `-e service=basecoin`. It will restart the underlying tendermint application too.
* stop.yml - Stop the application. You need to pass `-e service=<servicename>`.
* status.yml - Check the service status and print it. You need to pass `-e service=<servicename>`.
* start.yml - Start the application. You need to pass `-e service=<servicename>`.
* ubuntu16-patch.yml - Ubuntu 16.04 does not have the minimum required python package installed to be able to run ansible. If you are using Ubuntu, run this playbook first on the target machines. This will install the python package that is required for ansible to work correctly on the remote nodes.
* upgrade.yml - Upgrade the `service` on your testnet. It will stop the service and restart it at the end. It will only work if the upgraded version is backward compatible with the installed version.
* upgrade-reset.yml - Upgrade the `service` on your testnet and reset the database. It will stop the service and restart it at the end. It will work for upgrades where the new version is not backward-compatible with the installed version - however it will reset the testnet to its default.
The roles are self-sufficient under the `roles/` folder.
* install - install the application defined in the `service` parameter. It can install release packages and update them with custom-compiled binaries.
* unsafe_reset - delete the database for a service, including the tendermint database.
* config - configure the application defined in `service`. It also configures the underlying tendermint service. Check `group_vars/all` for options.
* stop - stop an application. Requires the `service` parameter set.
* status - check the status of an application. Requires the `service` parameter set.
* start - start an application. Requires the `service` parameter set.
## Default variables
Default variables are documented under `group_vars/all`. You can change the parameters there to deploy a previously created genesis.json file (instead of dynamically creating it) or to deploy custom-built binaries instead of a released version.
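Any option documented in `group_vars/all` can also be overridden for a single run with ansible's `-e` (`--extra-vars`) flag. The sketch below illustrates this; `myvar` is a placeholder, not a confirmed option name — use a variable actually documented in `group_vars/all`:

```shell
# Sketch: override a group_vars/all setting for one run.
# "myvar" is a placeholder variable name, not a real option.
DO_API_TOKEN="<The API token received from DigitalOcean>"
TF_VAR_TESTNET_NAME="testnet-servers"
ansible-playbook -i inventory/digital_ocean.py install.yml -e service=basecoin -e myvar=myvalue
```

Command-line `-e` values take precedence over the values in `group_vars/all`, which makes this handy for one-off experiments.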

+ 294
- 0
docs/using-ansible.rst View File

@ -0,0 +1,294 @@
Ansible playbook for Tendermint applications
============================================
.. figure:: img/a_plus_t.png
:alt: Ansible plus Tendermint
Ansible plus Tendermint
- `Prerequisites <#Prerequisites>`__
- `Ansible setup <#Ansible%20setup>`__
- `Running the playbooks <#Running%20the%20playbooks>`__
The playbooks in this folder run `ansible <http://www.ansible.com/>`__
roles which:
- install and configure basecoin or ethermint
- start/stop basecoin or ethermint and reset their configuration
Prerequisites
-------------
- Ansible 2.0 or higher
- SSH key to the servers
Optional for DigitalOcean droplets:

- DigitalOcean API Token
- python dopy package
Head over to the `Terraform
folder <https://github.com/tendermint/tools/tree/master/terraform-digitalocean>`__
for a description on how to get a DigitalOcean API Token.
Optional for Amazon AWS instances:

- Amazon AWS API access key ID and secret access key
The cloud inventory scripts come from the ansible team at their
`GitHub <https://github.com/ansible/ansible>`__ page. You can get the
latest version from the contrib/inventory folder.
Ansible setup
-------------
Ansible requires a "command machine" or "local machine" or "orchestrator
machine" to run on. This can be your laptop or any machine that can run
ansible. (It does not have to be part of the cloud network that hosts
your servers.)
Use the official `Ansible installation
guide <http://docs.ansible.com/ansible/intro_installation.html>`__ to
install Ansible. Here are a few examples on basic installation commands:
Ubuntu/Debian:
::
sudo apt-get install ansible
CentOS/RedHat:
::
sudo yum install epel-release
sudo yum install ansible
Mac OSX: If you have `Homebrew <https://brew.sh>`__ installed, then it's
simply
::
brew install ansible
If not, you can install it using ``pip``:
::
sudo easy_install pip
sudo pip install ansible
To make life easier, you can start an SSH Agent and load your SSH
key(s). This way ansible will have an uninterrupted way of connecting to
your servers.
::
ssh-agent > ~/.ssh/ssh.env
source ~/.ssh/ssh.env
ssh-add private.key
Subsequently, as long as the agent is running, you can use
``source ~/.ssh/ssh.env`` to load the keys to the current session. Note:
On Mac OSX, you can add the ``-K`` option to ssh-add to store the
passphrase in your keychain. The security of this feature is debated but
it is convenient.
Optional cloud dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you are using a cloud provider to host your servers, you need the
below dependencies installed on your local machine.
DigitalOcean inventory dependencies:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Ubuntu/Debian:
::
sudo apt-get install python-pip
sudo pip install dopy
CentOS/RedHat:
::
sudo yum install python-pip
sudo pip install dopy
Mac OSX:
::
sudo pip install dopy
Amazon AWS inventory dependencies:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Ubuntu/Debian:
::
sudo apt-get install python-boto
CentOS/RedHat:
::
sudo yum install python-boto
Mac OSX:
::
sudo pip install boto
Refreshing the DigitalOcean inventory
-------------------------------------
If you just finished creating droplets, the local DigitalOcean inventory
cache is not up-to-date. To refresh it, run:
::
DO_API_TOKEN="<The API token received from DigitalOcean>"
python -u inventory/digital_ocean.py --refresh-cache 1> /dev/null
Refreshing the Amazon AWS inventory
-----------------------------------
If you just finished creating Amazon AWS EC2 instances, the local AWS
inventory cache is not up-to-date. To refresh it, run:
::
AWS_ACCESS_KEY_ID='<The API access key ID received from Amazon>'
AWS_SECRET_ACCESS_KEY='<The API secret access key received from Amazon>'
python -u inventory/ec2.py --refresh-cache 1> /dev/null
Note: you don't need the access key and secret key set if you are
running ansible on an Amazon EC2 instance with the proper IAM
permissions set.
Running the playbooks
---------------------
The playbooks are locked down to only run if the environment variable
``TF_VAR_TESTNET_NAME`` is populated. This is a precaution so you don't
accidentally run the playbook on all your servers.
The variable ``TF_VAR_TESTNET_NAME`` contains the testnet name which
ansible translates into an ansible group. If you used Terraform to
create the servers, it was the testnet name used there.
If the playbook cannot connect to the servers because of public key
denial, your SSH Agent is not set up properly. Alternatively you can add
the SSH key to ansible using the ``--private-key`` option.
If you need to connect to the nodes as root but your local username is
different, use the ansible option ``-u root`` to tell ansible to connect
to the servers and authenticate as the root user.
If you secured your server and you need to ``sudo`` for root access, use
the ``-b`` or ``--become`` option to tell ansible to sudo to root
after connecting to the server. In the Terraform-DigitalOcean example,
if you created the ec2-user by adding the ``noroot=true`` option (or if
you are simply on Amazon AWS), you need to add the options
``-u ec2-user -b`` to ansible to tell it to connect as the ec2-user and
then sudo to root to run the playbook.
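Putting those options together, a full invocation for an AWS testnet where you connect as the ec2-user and sudo to root might look like this sketch (the inventory path and service name follow the AWS example in this document):

```shell
# Sketch: connect as ec2-user (-u), then sudo to root (-b) to run the playbook.
AWS_ACCESS_KEY_ID='<The API access key ID received from Amazon>'
AWS_SECRET_ACCESS_KEY='<The API secret access key received from Amazon>'
TF_VAR_TESTNET_NAME="testnet-servers"
ansible-playbook -i inventory/ec2.py install.yml -e service=basecoin -u ec2-user -b
```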
DigitalOcean
~~~~~~~~~~~~
::
DO_API_TOKEN="<The API token received from DigitalOcean>"
TF_VAR_TESTNET_NAME="testnet-servers"
ansible-playbook -i inventory/digital_ocean.py install.yml -e service=basecoin
Amazon AWS
~~~~~~~~~~
::
AWS_ACCESS_KEY_ID='<The API access key ID received from Amazon>'
AWS_SECRET_ACCESS_KEY='<The API secret access key received from Amazon>'
TF_VAR_TESTNET_NAME="testnet-servers"
ansible-playbook -i inventory/ec2.py install.yml -e service=basecoin
Installing custom versions
~~~~~~~~~~~~~~~~~~~~~~~~~~
By default ansible installs the tendermint, basecoin or ethermint binary
versions from the latest release in the repository. If you build your
own version of the binaries, you can tell ansible to install that
instead.
::
GOPATH="<your go path>"
go get -u github.com/tendermint/basecoin/cmd/basecoin
DO_API_TOKEN="<The API token received from DigitalOcean>"
TF_VAR_TESTNET_NAME="testnet-servers"
ansible-playbook -i inventory/digital_ocean.py install.yml -e service=basecoin -e release_install=false
Alternatively you can change the variable settings in
``group_vars/all``.
Other commands and roles
------------------------
There are a few extra playbooks to make managing your servers easier.
- install.yml - Install basecoin or ethermint applications. (Tendermint
gets installed automatically.) Use the ``service`` parameter to
define which application to install. Defaults to ``basecoin``.
- reset.yml - Stop the application, reset the configuration and data,
then start the application again. You need to pass
``-e service=<servicename>``, like ``-e service=basecoin``. It will
restart the underlying tendermint application too.
- restart.yml - Restart a service on all nodes. You need to pass
``-e service=<servicename>``, like ``-e service=basecoin``. It will
restart the underlying tendermint application too.
- stop.yml - Stop the application. You need to pass
``-e service=<servicename>``.
- status.yml - Check the service status and print it. You need to pass
``-e service=<servicename>``.
- start.yml - Start the application. You need to pass
``-e service=<servicename>``.
- ubuntu16-patch.yml - Ubuntu 16.04 does not have the minimum required
  python package installed to be able to run ansible. If you are using
  ubuntu, run this playbook first on the target machines. This will
  install the python package that is required for ansible to work
  correctly on the remote nodes.
- upgrade.yml - Upgrade the ``service`` on your testnet. It will stop
the service and restart it at the end. It will only work if the
upgraded version is backward compatible with the installed version.
- upgrade-reset.yml - Upgrade the ``service`` on your testnet and reset
the database. It will stop the service and restart it at the end. It
will work for upgrades where the new version is not
backward-compatible with the installed version - however it will
reset the testnet to its default.
The roles are self-sufficient under the ``roles/`` folder.
- install - install the application defined in the ``service``
parameter. It can install release packages and update them with
custom-compiled binaries.
- unsafe\_reset - delete the database for a service, including the
tendermint database.
- config - configure the application defined in ``service``. It also
configures the underlying tendermint service. Check
``group_vars/all`` for options.
- stop - stop an application. Requires the ``service`` parameter set.
- status - check the status of an application. Requires the ``service``
parameter set.
- start - start an application. Requires the ``service`` parameter set.
Default variables
-----------------
Default variables are documented under ``group_vars/all``. You can
change the parameters there to deploy a previously created genesis.json
file (instead of dynamically creating it) or to deploy custom-built
binaries instead of a released version.
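Any option documented in ``group_vars/all`` can also be overridden for a single run with ansible's ``-e`` (``--extra-vars``) flag. The sketch below illustrates this; ``myvar`` is a placeholder, not a confirmed option name — use a variable actually documented in ``group_vars/all``:

```shell
# Sketch: override a group_vars/all setting for one run.
# "myvar" is a placeholder variable name, not a real option.
DO_API_TOKEN="<The API token received from DigitalOcean>"
TF_VAR_TESTNET_NAME="testnet-servers"
ansible-playbook -i inventory/digital_ocean.py install.yml -e service=basecoin -e myvar=myvalue
```

Command-line ``-e`` values take precedence over the values in ``group_vars/all``, which makes this handy for one-off experiments.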

+ 0
- 95
docs/using-docker.md View File

@ -1,95 +0,0 @@
# Docker container description for Tendermint applications
* [Overview](#overview)
* [Tendermint](#tendermint)
* [Basecoin](#basecoin)
* [Ethermint](#ethermint)
## Overview
This folder contains Docker container descriptions. Using this folder you can build your own Docker images with the tendermint application.
It is assumed that you set up docker already.
If you don't want to build the images yourself, you should be able to download them from Docker Hub.
## Tendermint
Build the container:
Copy the `tendermint` binary to the `tendermint` folder.
```
docker build -t tendermint tendermint
```
The application configuration will be stored at `/tendermint` in the container. Ports 46656 (P2P) and 46657 (RPC) will be open for other nodes and clients to connect.
Initialize tendermint configuration and keep it after the container is finished in a docker volume called `data`:
```
docker run --rm -v data:/tendermint tendermint init
```
If you want the docker volume to be a physical directory on your filesystem, you have to give an absolute path to docker and make sure the permissions allow the application to write to it.
Get the public key of tendermint:
```
docker run --rm -v data:/tendermint tendermint show_validator
```
Run the docker tendermint application with:
```
docker run --rm -d -v data:/tendermint tendermint node
```
## Basecoin
Build the container:
Copy the `basecoin` binary to the `basecoin` folder.
```
docker build -t basecoin basecoin
```
The application configuration will be stored at `/basecoin`.
Initialize basecoin configuration and keep it after the container is finished:
```
docker run --rm -v basecoindata:/basecoin basecoin init deadbeef
```
Use your own basecoin account instead of `deadbeef` in the `init` command.
Get the public key of basecoin:
We use a trick here: since the basecoin and the tendermint configuration folders are similar, the `tendermint` command can extract the public key for us if we feed the basecoin configuration folder to tendermint.
```
docker run --rm -v basecoindata:/tendermint tendermint show_validator
```
Run the docker tendermint application with:
This is a two-step process:
* Run the basecoin container.
* Run the tendermint container and expose the ports that allow clients to connect. The --proxy_app should contain the basecoin application's IP address and port.
```
docker run --rm -d -v basecoindata:/basecoin basecoin start --without-tendermint
docker run --rm -d -v data:/tendermint -p 46656-46657:46656-46657 tendermint node --proxy_app tcp://172.17.0.2:46658
```
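The hard-coded `172.17.0.2` address above is only valid if basecoin happens to be the first container on docker's default bridge network. As a sketch, you can look the address up with `docker inspect` instead of assuming it (the image names, volume names, and flags follow the commands above):

```shell
# Sketch: discover the basecoin container's IP instead of assuming 172.17.0.2.
BASECOIN_ID=$(docker run --rm -d -v basecoindata:/basecoin basecoin start --without-tendermint)
BASECOIN_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$BASECOIN_ID")
docker run --rm -d -v data:/tendermint -p 46656-46657:46656-46657 \
  tendermint node --proxy_app "tcp://$BASECOIN_IP:46658"
```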
## Ethermint
Build the container:
Copy the `ethermint` binary and the setup folder to the `ethermint` folder.
```
docker build -t ethermint ethermint
```
The application configuration will be stored at `/ethermint`.
The files required for initializing ethermint (the files in the source `setup` folder) are under `/setup`.
Initialize ethermint configuration:
```
docker run --rm -v ethermintdata:/ethermint ethermint init /setup/genesis.json
```
Start ethermint as a validator node:
This is a two-step process:
* Run the ethermint container. You will have to define where tendermint runs as the ethermint binary connects to it explicitly.
* Run the tendermint container and expose the ports that allow clients to connect. The --proxy_app should contain the ethermint application's IP address and port.
```
docker run --rm -d -v ethermintdata:/ethermint ethermint --tendermint_addr tcp://172.17.0.3:46657
docker run --rm -d -v data:/tendermint -p 46656-46657:46656-46657 tendermint node --proxy_app tcp://172.17.0.2:46658
```

+ 128
- 0
docs/using-docker.rst View File

@ -0,0 +1,128 @@
Docker container description for Tendermint applications
========================================================
- `Overview <#Overview>`__
- `Tendermint <#Tendermint>`__
- `Basecoin <#Basecoin>`__
- `Ethermint <#Ethermint>`__
Overview
--------
This folder contains Docker container descriptions. Using this folder
you can build your own Docker images with the tendermint application.
It is assumed that you set up docker already.
If you don't want to build the images yourself, you should be able to
download them from Docker Hub.
Tendermint
----------
Build the container: Copy the ``tendermint`` binary to the
``tendermint`` folder.
::
docker build -t tendermint tendermint
The application configuration will be stored at ``/tendermint`` in the
container. Ports 46656 (P2P) and 46657 (RPC) will be open for other
nodes and clients to connect.
Initialize tendermint configuration and keep it after the container is
finished in a docker volume called ``data``:
::
docker run --rm -v data:/tendermint tendermint init
If you want the docker volume to be a physical directory on your
filesystem, you have to give an absolute path to docker and make sure
the permissions allow the application to write to it.
Get the public key of tendermint:
::
docker run --rm -v data:/tendermint tendermint show_validator
Run the docker tendermint application with:
::
docker run --rm -d -v data:/tendermint tendermint node
Basecoin
--------
Build the container: Copy the ``basecoin`` binary to the ``basecoin``
folder.
::
docker build -t basecoin basecoin
The application configuration will be stored at ``/basecoin``.
Initialize basecoin configuration and keep it after the container is
finished:
::
docker run --rm -v basecoindata:/basecoin basecoin init deadbeef
Use your own basecoin account instead of ``deadbeef`` in the ``init``
command.
Get the public key of basecoin: We use a trick here: since the basecoin
and the tendermint configuration folders are similar, the ``tendermint``
command can extract the public key for us if we feed the basecoin
configuration folder to tendermint.
::
docker run --rm -v basecoindata:/tendermint tendermint show_validator
Run the docker tendermint application with: This is a two-step process:

- Run the basecoin container.
- Run the tendermint container and expose the ports that allow clients
  to connect. The ``--proxy_app`` option should contain the basecoin
  application's IP address and port.
::
docker run --rm -d -v basecoindata:/basecoin basecoin start --without-tendermint
docker run --rm -d -v data:/tendermint -p 46656-46657:46656-46657 tendermint node --proxy_app tcp://172.17.0.2:46658
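The hard-coded ``172.17.0.2`` address above is only valid if basecoin happens to be the first container on docker's default bridge network. As a sketch, you can look the address up with ``docker inspect`` instead of assuming it (the image names, volume names, and flags follow the commands above):

```shell
# Sketch: discover the basecoin container's IP instead of assuming 172.17.0.2.
BASECOIN_ID=$(docker run --rm -d -v basecoindata:/basecoin basecoin start --without-tendermint)
BASECOIN_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$BASECOIN_ID")
docker run --rm -d -v data:/tendermint -p 46656-46657:46656-46657 \
  tendermint node --proxy_app "tcp://$BASECOIN_IP:46658"
```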
Ethermint
---------
Build the container: Copy the ``ethermint`` binary and the setup folder
to the ``ethermint`` folder.
::
docker build -t ethermint ethermint
The application configuration will be stored at ``/ethermint``. The
files required for initializing ethermint (the files in the source
``setup`` folder) are under ``/setup``.
Initialize ethermint configuration:
::
docker run --rm -v ethermintdata:/ethermint ethermint init /setup/genesis.json
Start ethermint as a validator node: This is a two-step process:

- Run the ethermint container. You will have to define where tendermint
  runs as the ethermint binary connects to it explicitly.
- Run the tendermint container and expose the ports that allow clients
  to connect. The ``--proxy_app`` option should contain the ethermint
  application's IP address and port.
::
docker run --rm -d -v ethermintdata:/ethermint ethermint --tendermint_addr tcp://172.17.0.3:46657
docker run --rm -d -v data:/tendermint -p 46656-46657:46656-46657 tendermint node --proxy_app tcp://172.17.0.2:46658
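With port 46657 published on the host, you can sanity-check that the node is up by querying tendermint's RPC status endpoint. This is a sketch that assumes the containers above are running on the local machine:

```shell
# Sketch: query the tendermint RPC once port 46657 is published locally.
curl -s http://localhost:46657/status
```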
