
Clarify the deployment process

Open laurentS opened this issue 5 years ago • 13 comments

I was very impressed by this template project, until I tried to deploy it to my single server swarm. This was my first time playing with docker swarm, and I had not realised that all deployment commands are meant to be run on a node of the swarm (I only saw this confirmed in a random stackoverflow post; did I miss it in some obvious doc?). I think the generated gitlab-ci.yml file further misled me, in that it gave me the impression I could just push it to gitlab and be happy :smile:

I easily set up the swarm following the great tutorial at https://dockerswarm.rocks/ but then was a bit at a loss as to how to connect the dots between my generated project and the swarm.

My setup is super basic: I'm trying to deploy to a single node swarm from a gitlab.com CI script (running on their shared CI runners). It would have helped me to see these extra bits of info:

  • the docker stack deploy command must be run from a node inside the swarm
  • what options I have to deploy if the machine running the deploy.sh script is not part of the swarm:
    • temporarily join the swarm? I wasn't excited to do this, since my CI is running on shared machines.
    • set up docker to listen on HTTP to run docker stack deploy from outside the swarm? I tried this, but it left traefik unable to connect to the docker daemon ("Provider connection error Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" in the service's logs), and I could see how this would cascade into a series of problems, as the project template is probably not designed for it.
    • scp the required files onto one of the swarm nodes, then ssh into it and run deploy.sh? I ended up going for this solution (a rough sketch is below), which is probably not the cleanest, but it has the advantage that you can then deploy from any machine that can ssh into your swarm.
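
For anyone curious, here is roughly what that last option looks like. This is only a sketch: the user, host, stack name, and file list are placeholders for your own setup; the variables passed to deploy.sh are the same ones used later in this thread.

# Copy the release files to a manager node, then run deploy.sh there over SSH.
scp docker-compose.yml .env scripts/deploy.sh deploy@swarm-manager:/srv/myapp/
ssh deploy@swarm-manager \
	'cd /srv/myapp && DOMAIN=myapp.example.com STACK_NAME=myapp TAG=prod bash deploy.sh'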

So 3 questions:

  • did I miss something in the docs that would have explained this?
  • if not, did I miss any easier/better option to deploy?
  • if not, would you be interested in a small PR explaining what I did? I feel like the whole tutorial between here and dockerswarm.rocks is super well written and helpful, but this particular part left me completely at a loss (but I did learn a lot from reading around, on the plus side :wink: )

laurentS avatar Feb 03 '20 17:02 laurentS

I have the same concern as you, so I'll also subscribe to the discussion.

BTW, did you follow this Readme.md (Deployment part) during your investigation?

ascherbakhov avatar Feb 06 '20 23:02 ascherbakhov

I agree that there could be a section for local development in the doc. Here are a few ideas to help you get started... maybe you could help the project in return with a PR once you get everything running :)

The key is the COMPOSE_FILE variable in the .env file. It specifies which docker-compose*.yml files to use in your composition (separated by colons), in particular the docker-compose.dev.*.yml files.

Once you load the content of the .env file into your shell, you can run the standard docker-compose commands (e.g. pull, build, up, logs, and so on).

This will run a version of your backend locally (and that is also what is used for testing purposes).
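
For example (a minimal sketch; the exact override file names depend on your generated project):

# Export everything defined in .env, including COMPOSE_FILE, then run
# docker-compose as usual; it picks up every file listed in COMPOSE_FILE.
set -a
source .env
set +a
docker-compose up -d
docker-compose logs --tail 20 -f backend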

On my side, I use a Makefile to smooth the process. I have added a lot of other commands to facilitate my life, like make pipenv or make lint, to respectively load a virtual environment or run the lint.sh process with my custom settings.

Anyway, here is the foundation:

# Light wrappers around docker-compose. The variable 'services' can be used to select one
# or many specific services to run. E.g.: services=backend make logs in order to display the 
# logs of the backend only

ps:
	docker ps --format 'table {{.Image}}\t{{.Status}}\t{{.Ports}}\t{{.Names}}'

pull:
	docker-compose pull $(services)
	docker-compose build --pull $(services)

up: check-env
	docker-compose up -d $(services)

down:
	docker-compose down

stop:
	docker-compose stop $(services)

logs: check-env
	docker-compose logs --tail 20 -f $(services)

build: check-env
	docker-compose up --build -d $(services)

test: check-env
	docker-compose exec backend-tests /tests-start.sh $(args)

lint:
	cd backend/app && \
		pipenv run bash scripts/lint.sh $(target)

###
# Helper for initialization

check-env:
ifeq ($(wildcard .env),)
	cp .sample.env .env
	cp frontend/.sample.env frontend/.env
	@echo "Generated \033[32m.env\033[0m"
	@echo "  \033[31m>> Check its default values\033[0m"
	@exit 1
else
include .env
export
endif
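
Usage then looks like this ('services' and 'args' are the variables referenced in the Makefile above):

make up                      # bring the whole stack up (runs check-env first)
services=backend make logs   # tail only the backend logs
make test args="-x"          # extra args are passed through to /tests-start.sh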

ebreton avatar Feb 07 '20 06:02 ebreton

@ebreton, thank you very much. But sorry, my concern was actually a different one. It seems @laurentS is asking about the free shared runners on gitlab.com machines, whereas I use my own runner on a DigitalOcean machine.

I've also only recently started, but my issue is about using secrets in dotenv files. I cannot commit passwords and secrets to the repository, but if I remove them from the env files, how should I set them up in the deployed docker container? Or maybe I shouldn't commit env files at all?

ascherbakhov avatar Feb 07 '20 08:02 ascherbakhov

@ebreton this is a neat Makefile, thanks for sharing! My question was indeed more focused on interacting with the swarm for production, given that this is the setup of choice. What was missing in the tutorial in my mind was "now you've got this app running locally, and you've got a swarm set up on a (or several) server(s), here's how you connect the dots". I wouldn't mind creating a PR with a bit of write up about how I did things, this ticket was mostly to check that it would be useful and that I hadn't missed anything obvious, as I'm new to docker swarm.

laurentS avatar Feb 07 '20 10:02 laurentS

Hi @ascherbakhov

Or maybe I shouldn't commit env files at all?

Indeed, those files are better out of the repo. That's why I have a .sample.env instead. This one is committed, and copied by the check-env command in the Makefile.

When you install a new stack, you can create your specific .env file on the node.

ebreton avatar Feb 07 '20 15:02 ebreton

My question was indeed more focused on interacting with the swarm for production, given that this is the setup of choice. What was missing in the tutorial in my mind was "now you've got this app running locally, and you've got a swarm set up on a (or several) server(s), here's how you connect the dots".

You are right. I am also relying on the Makefile for this (even though it is getting too fat for my taste). Anyway, I have some push-[env] and deploy-[env] commands that I use when I am satisfied with my local version of the application.

It makes use of tiangolo's scripts.

The trick is to properly override the environment variables, and to understand where and when to run the commands.

Here is the section in the Makefile:

# push-[env] commands are run on the development host. They do everything needed for a
# release and push the new image to docker hub.
#
# In order to follow the best practices, the same image will be used in all environments,
# once validated appropriately (dev -> qa -> prod)
#
# This means there are very few environment variables to override in this step.
# Logically, only the docker TAG, but there is FRONTEND_ENV which also differs

push-dev: login
	cp frontend/.env-dev frontend/.env
	TAG=latest FRONTEND_ENV=development bash scripts/build-push.sh

push-qa: login
	cp frontend/.env-qa frontend/.env
	TAG=qa FRONTEND_ENV=staging bash scripts/build-push.sh

...

# deploy-[env] commands are run on the swarm manager. It is the moment where the environment
# variables are set for a docker image. That is why the make command makes sure that the
# appropriate .env-[env] file exists, and links it as .env
#
# Additionally to the values in this .env file (e.g. the postgres connection string), the make
# commands also override some more variables (which are not sensitive like passwords, and can
# therefore be committed)

deploy-dev: login
ifeq ($(wildcard .env-dev),)
	@echo "\033[31m>> Create .env-dev first\033[0m"
	@exit 1
endif
	cp .env .env-backup && rm .env && ln -s .env-dev .env
	source .env && DOMAIN=backend-dev.cortexia.io \
		BACKEND_CORS_ORIGINS=http://localhost:3000,https://web-dev.cortexia.ch... \
		STACK_NAME=backend-dev \
		TAG=latest \
		bash ./scripts/deploy.sh

deploy-qa: login
ifeq ($(wildcard .env-qa),)
	@echo "\033[31m>> Create .env-qa first\033[0m"
	@exit 1
endif
	cp .env .env-backup && rm .env && ln -s .env-qa .env
	source .env && DOMAIN=backend-qa.cortexia.io \
		BACKEND_CORS_ORIGINS=https://web-qa.cortexia.ch... \
		STACK_NAME=backend-qa \
		TAG=qa \
		bash ./scripts/deploy.sh

...

# Helper commands

login:
	docker login

To summarize:

  1. development [local] : make build up logs
  2. validation [local] : make test lint
  3. release [local] -> to dockerhub : make push-dev
  4. deployment [swarm] <- from dockerhub : make deploy-dev

Next step for you will be to automate this process... I am using Docker Hub to run the tests automatically and trigger a webhook in Portainer (if build + tests are successful), which in turn updates the image of a running container.
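
For reference, the Portainer side is just an HTTP call: Docker Hub fires it after a successful build, and Portainer re-pulls the image and updates the service. The URL below is made up; the real one comes from the webhook settings of your service in Portainer.

# Trigger a Portainer service webhook (hypothetical URL/token).
curl -X POST "https://portainer.example.com/api/webhooks/<your-webhook-token>"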

ebreton avatar Feb 07 '20 16:02 ebreton

Thanks for these explanations and sharing your code @ebreton! I've set up deployment with gitlab CI, based on the template that came with this cookiecutter. Basically, 3 steps, and abort if anything fails:

  • run tests on the CI runner
  • build docker images and push to private image repository (on gitlab)
  • deploy new stack to docker swarm (which the CI runner is not part of)

I'm still getting my head around how swarm mode works, but it looks very promising for my reasonably limited needs. The three stages boil down to roughly the commands sketched below.
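
Here is the rough shape (test.sh is my guess at the template's script name; build-push.sh and deploy.sh are the ones discussed above):

# 1. test: run the test suite on the CI runner
bash ./scripts/test.sh
# 2. build: build the images and push them to the gitlab registry
TAG=prod bash ./scripts/build-push.sh
# 3. deploy: must run on (or via ssh against) a node of the swarm
TAG=prod bash ./scripts/deploy.sh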

laurentS avatar Feb 07 '20 17:02 laurentS

Thank you a lot! Could you give me a piece of advice: where do you store production .env files if you don't commit them? I've uploaded backend.env and other env files to a separate folder on the production machine, and then the CI script copies them to the project folder after checkout. But what is good practice for .env files?

ascherbakhov avatar Feb 08 '20 14:02 ascherbakhov

@ascherbakhov I guess the question was not aimed at me, but what I usually do is try to follow 12 factor principles. For config like you mention, at least when I deploy from CI, I use the CI's environment variables to pass those values to the CI process which in turn passes them to the deployed instance. For instance in this gitlab-ci.yml line you could replace the right side with an env var that you set in gitlab's CI settings. This allows you to deploy to production without having any secrets (as in, passwords, api keys...) in your code. For the non-secret stuff (server urls, etc...), I would collect all those settings in a single file, so that if you're porting your app, you just need to change that one file. Happy to hear what others are doing! (though we're probably off-topic for this repo ticket now)
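
As a concrete sketch of that (the variable names here are made up; you would define them under your project's CI/CD settings in gitlab):

# gitlab injects the secret into the job's environment; nothing sensitive
# is ever committed, and deploy.sh picks the values up from the environment.
export POSTGRES_PASSWORD="$CI_POSTGRES_PASSWORD"
export SECRET_KEY="$CI_SECRET_KEY"
DOMAIN=myapp.example.com TAG=prod bash ./scripts/deploy.sh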

laurentS avatar Feb 08 '20 15:02 laurentS

@laurentS This has been a useful thread, thanks!

I wanted to check with you, did you install GitLab on a machine in your swarm? Or did you use the GitLab image repository (at GitLab.com) to store your images? It's my first time doing this part of deployment.

So far I have been building images manually on all swarm nodes so they are available locally (as described in the docs).

Finally, have you gotten to the point since you wrote the last post of investigating scaling the backend and frontend services? I wrote issue #264 asking about this.

danieljfarrell avatar Sep 14 '20 08:09 danieljfarrell

Hi @danieljfarrell I'm not going to be much help here, but what I can tell you:

  • I use the gitlab.com image registry, and have set up one of my servers (which is part of the swarm) as a gitlab CI runner (no need to install all of gitlab on your machine) that builds the image and uploads it to the registry. Then my docker config pulls images from the registry. I guess the only advantage over building images locally is that you build once, and then download the image from each server. It was a while back, but I remember that using the private registry was a bit of a headache because of authentication and all... (there's a sketch of that part after this list)
  • on scaling, I have not needed it yet, so can't help, sorry!
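
The authentication part boils down to logging the node in to the gitlab registry and passing the credentials along when deploying (the stack name is a placeholder):

# Use a gitlab deploy token (or personal access token) as the credentials.
docker login registry.gitlab.com
# --with-registry-auth sends the login to the swarm agents so they can pull.
docker stack deploy -c docker-stack.yml --with-registry-auth mystack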

laurentS avatar Sep 14 '20 12:09 laurentS

Hi @laurentS, thanks for the answer. I see that your dedicated server is building the image; in my case AWS Lightsail is too weak for this operation. So in theory I can use GitLab to build the image, and then use the GitLab runner on my server to just pull down and update the images?

PetrShchukin avatar Feb 07 '21 20:02 PetrShchukin

Thank you a lot! Could you give me a piece of advice: where do you store production .env files if you don't commit them? I've uploaded backend.env and other env files to a separate folder on the production machine, and then the CI script copies them to the project folder after checkout. But what is good practice for .env files?

@ascherbakhov I had the same question. It's quite frustrating for a newbie to set up the CI/CD pipeline...

The current solution I found for Gitlab CI (source):

  • create a CI/CD variable, let's say "DOTENV", and add the content of the .env file (which is not under version control); just paste the whole file into the value field.
  • I had to uncheck Protected Variable.
  • in the gitlab-ci.yml, add the line - echo "$DOTENV" > .env to the before_script section.
  • this way the .env file is created every time the pipeline runs and is available for docker-compose to create the docker-stack.yml file (see the sketch below).

Don't know if that's the best solution but it works :-)
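
In shell terms, that step amounts to the following (mystack is a placeholder; the config/deploy lines are roughly what the template's deploy.sh does):

# Recreate .env from the CI/CD variable, render the stack file, deploy it.
echo "$DOTENV" > .env
docker-compose -f docker-compose.yml config > docker-stack.yml
docker stack deploy -c docker-stack.yml mystack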

t11c avatar Mar 02 '21 15:03 t11c