Static deployment of a 3-node cluster
Goals
Provide an alternative deployment method to Helm charts.
Requirements
In some scenarios it may not be possible to deploy Monasca on a Kubernetes cluster. Therefore, the following high-level requirements have been defined:
- Provide an easy deployment method for Monasca on a 3-node cluster
- No flexibility required
- No cluster management layer should be used (Kubernetes, Swarm)
Overview
Docker Compose offers the possibility of using multiple Compose files for customizing and extending the application. The configuration of the static cluster can be achieved by providing additional Compose files for individual nodes. The central docker-compose.yaml file can then be run together with the cluster Compose files, and the configuration will be overridden or extended. The cluster configuration should include, for example:
- zookeeper.hosts
- kafka.hosts
- kafka.broker_id
- and so on
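As an illustrative sketch of such an override (the exact service names and environment variable names depend on the monasca-docker images; the ones below are assumptions), a per-node file could look like this:

```yaml
# node1.yaml - illustrative per-node override; service and variable
# names are assumptions, not the confirmed monasca-docker interface.
version: '3'
services:
  zookeeper:
    environment:
      ZOOKEEPER_SERVER_ID: "1"    # unique per node
  kafka:
    environment:
      KAFKA_BROKER_ID: "1"        # unique per node
      ZOOKEEPER_CONNECTION_STRING: "${NODE1_IP}:2181,${NODE2_IP}:2181,${NODE3_IP}:2181"
  monasca-api:
    environment:
      KAFKA_URI: "${NODE1_IP}:9092,${NODE2_IP}:9092,${NODE3_IP}:9092"
```

node2.yaml and node3.yaml would differ only in the broker/server IDs and the node-specific addresses.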
Apart from providing the cluster configuration, the Compose files have to:
- add and configure load balancer for databases (MySQL, InfluxDB)
- add InfluxDB-relay
- define network configuration, preferably containers should be added to host network
- define port configuration to avoid conflicts on localhost (development env); each node could have different set of port numbers
- disable Keystone container deployment
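A sketch of the non-Keystone parts of such an override (the load balancer and relay images and configuration paths are placeholders for illustration):

```yaml
# node1.yaml (continued) - illustrative; image names and config
# paths are placeholders, not a confirmed setup.
services:
  haproxy:
    image: haproxy:1.7
    network_mode: host            # containers attached to the host network
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
  influxdb-relay:
    image: influxdb-relay:latest  # placeholder image name
    network_mode: host
  influxdb:
    network_mode: host
  mysql:
    network_mode: host
```

Since a Compose override file cannot remove a service defined in the base file, one way to disable the Keystone container is to move it out of docker-compose.yaml into its own Compose file that is simply not passed on cluster deployments.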
Docker containers communicating with Keystone should be configured with OpenStack Keystone endpoints, credentials and roles information. For example, in the case of monasca-api, the following environment variables have to be set according to the OpenStack Keystone configuration:
| Variable | Default | Description |
|---|---|---|
| KEYSTONE_IDENTITY_URI | http://${KEYSTONE_IP}:35357 | Keystone identity address |
| KEYSTONE_AUTH_URI | http://${KEYSTONE_IP}:5000 | Keystone auth address |
| KEYSTONE_ADMIN_USER | admin | Keystone admin account user |
| KEYSTONE_ADMIN_PASSWORD | secretadmin | Keystone admin account password |
| KEYSTONE_ADMIN_TENANT | admin | Keystone admin account tenant |
| AUTHORIZED_ROLES | user, domainuser, domainadmin, monasca-user | Roles for admin users |
| AGENT_AUTHORIZED_ROLES | monasca-agent | Roles for metric write-only users |
| READ_ONLY_AUTHORIZED_ROLES | monasca-read-only-user | Roles for read-only users |
| DELEGATE_AUTHORIZED_ROLES | admin | Roles allowed to read/write across tenant IDs |
The containers which require Keystone related environment variables are:
- monasca-api
- monasca-log-api
- kibana
- monasca-agent
- monasca-log-agent
All Keystone related environment variables and their default values should be collected in a keystone.env file and read by the Compose file, so that the user can easily control them.
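A sketch of such a file, using the defaults from the table above:

```
# keystone.env - defaults taken from the table above.
# Note: docker-compose does not interpolate variables inside env_file
# entries, so KEYSTONE_IP has to be substituted beforehand
# (e.g. with envsubst) or the values written out literally.
KEYSTONE_IDENTITY_URI=http://${KEYSTONE_IP}:35357
KEYSTONE_AUTH_URI=http://${KEYSTONE_IP}:5000
KEYSTONE_ADMIN_USER=admin
KEYSTONE_ADMIN_PASSWORD=secretadmin
KEYSTONE_ADMIN_TENANT=admin
AUTHORIZED_ROLES=user, domainuser, domainadmin, monasca-user
AGENT_AUTHORIZED_ROLES=monasca-agent
READ_ONLY_AUTHORIZED_ROLES=monasca-read-only-user
DELEGATE_AUTHORIZED_ROLES=admin
```

Each service that needs these variables would then reference the file via `env_file: [keystone.env]` in the Compose file.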
UX
The user should be able to deploy the cluster by specifying the IP addresses of the nodes and the Keystone endpoint, and running the Compose files.
:~$ export KEYSTONE_IP=11.11.11.1
:~$ export NODE1_IP=10.10.10.1
:~$ export NODE2_IP=10.10.10.2
:~$ export NODE3_IP=10.10.10.3
:~$ export DOCKER_HOST=tcp://${NODE1_IP}:2375
:~$ docker-compose -f docker-compose.yaml -f node1.yaml up
:~$ export DOCKER_HOST=tcp://${NODE2_IP}:2375
:~$ docker-compose -f docker-compose.yaml -f node2.yaml up
:~$ export DOCKER_HOST=tcp://${NODE3_IP}:2375
:~$ docker-compose -f docker-compose.yaml -f node3.yaml up
UX: cluster setup on a single machine (e.g. development, testing, CI):
:~$ export KEYSTONE_IP=11.11.11.1
:~$ export NODE1_IP=10.10.10.1
:~$ export NODE2_IP=10.10.10.1
:~$ export NODE3_IP=10.10.10.1
:~$ docker-compose -f docker-compose.yaml -f node1.yaml up
:~$ docker-compose -f docker-compose.yaml -f node2.yaml up
:~$ docker-compose -f docker-compose.yaml -f node3.yaml up
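On a single machine host networking cannot be used for all three node files at once, so each node file would remap the conflicting ports. An illustrative fragment (the chosen host ports are arbitrary examples):

```yaml
# node2.yaml - illustrative port remapping for single-machine runs;
# the chosen host ports are arbitrary examples.
services:
  kafka:
    ports:
      - "9093:9092"   # host port 9093 instead of the default 9092
  influxdb:
    ports:
      - "8087:8086"
  zookeeper:
    ports:
      - "2182:2181"
```

Running each node under a separate Compose project name (`docker-compose -p node2 ...`) would also avoid container name clashes.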
Additional references
Examples of static clusters with Docker Compose:
Would be great to hear your opinion about this idea. How should the network configuration be solved? What have we missed? Thanks for your input.
Hmm. I can understand why you'd want to just use plain docker-compose since it's so much simpler than full k8s or the like, and the three-node config UX example you provided seems reasonable enough. Making an environment like this flexible enough for production deployments could be a large problem, though ... the configurability we have in monasca-helm may not be possible in docker-compose, for better or for worse.
That being said, you do end up needing to work around a lot of issues (network config, service discovery, secret management...) that something like Docker Swarm provides while being less complex than Kubernetes. Out of curiosity, why isn't Swarm an option?
@witekest is on vacation, so I'll try to answer:
The blocker issue for Swarm is the lack of support by Red Hat and SUSE. This is a problem for at least some users.
Swarm is kind of similar to Kubernetes. I am not sure that someone who does not want to use Kubernetes would want to use Swarm instead. Also, I have never heard anyone request Swarm support. But I am not sure about this argument - just my guess.
In general, the idea is to provide Monasca with Kubernetes and a simplified alternative consisting of Docker + X (similar to Kolla).
A static 3-node cluster seems to be a sweet spot. It offers some real simplifications over Kubernetes. On the other hand, for a more complex deployment with, let's say, 10 nodes, Kubernetes obviously has advantages.
The issues you mention (network config, service discovery, secret management...) should not cause too many headaches as everything is static. In other words, we shouldn't need, e.g., service discovery. WDYT?
Whether a static 3-node cluster is oversimplified for production is a good question. I am not sure about that. Can you elaborate?
The idea seems reasonable enough I think, if you're able to come up with a static deployment that meets your needs I don't see any issues merging it. I'm all for giving users more deployment options, especially when it's simpler than Kubernetes' craziness.
@witekest I would like to give an update: we tried Docker Swarm for cluster mode, and the problem is with Kafka. If one Kafka instance dies in cluster mode, it fails when Swarm brings it back up. The reason is that Kafka is exposed with a single broker_id, so for a three-node setup there should be a dedicated Kafka for each host.
If Kafka dies, monasca-api immediately starts to fail; there should be a decent buffer time for responses until Kafka comes back up.
No dedicated docker-compose file is needed apart from monasca/kafka.