docker-compose-buildkite-plugin

Running migration steps or multiple compose

pecigonzalo opened this issue 6 years ago • 10 comments

Hi! We have some integration tests that require database migrations, and I wonder what the recommended pattern for this is (maybe outside this plugin's scope).

I tried doing:

[...]
  - label: ":hammer: Integration"
    plugins:
      - docker-compose#v2.1.0:
          run: migrator
          tty: false
          config:
            - docker-compose.yml
      - docker-compose#v2.1.0:
          run: integrator
          tty: false
          config:
            - docker-compose.yml
[...]

but since the images are cleaned up after the plugin runs, I'm left with no migrated DB.

We have a depends_on in the compose file, but that is no guarantee the migration has actually finished.

I thought of using the integrator to query the DB or perform the migration itself, but to be honest that seems rather awkward and not even close to a 1:1 match with how the migration is run outside of integration.

pecigonzalo avatar Mar 06 '19 17:03 pecigonzalo

Hi there! It's a good question 🤔

Could you provide a minimal docker-compose.yml along with those pipeline.yml steps, so we can reproduce it locally and think about what a solution might look like?

toolmantim avatar Mar 07 '19 03:03 toolmantim

Sure thing: docker-compose.yml:

version: "2"

networks:
  app:
  db:

services:
  app:
    image: "nginx"
    depends_on:
      - migrator
      - db
    networks:
      - app
      - db

  migrator:
    image: "boxfuse/flyway:5-alpine"
    command:
      [
        "-url=jdbc:postgresql://db:5432/",
        "-user=postgres",
        "-password=mysecretpassword",
        "-connectRetries=60",
        "migrate",
      ]
    volumes:
      - ./migration:/flyway/sql
    depends_on:
      - db
    networks:
      - db

  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: mysecretpassword
    ports:
      - 5432
    networks:
      - db

docker-compose.it.yml:

version: "2"

services:
  integrator:
    image: "appropriate/curl"
    command: ["http://app/"]
    depends_on:
      - app
    networks:
      - app

migration/V1__create_users_table.sql:

create table users (
  id UUID primary key,
  name text not null,
  email varchar(100),
  dob date
);

We assume app connects to the DB, and integrator performs a request that requires app to be connected to the DB.

Pipeline:

steps:
  - label: ":hammer: SBT Integration"
    plugins:
      - docker-compose#v2.1.0:
          run: migrator
          config:
            - docker-compose.yml
      - docker-compose#v2.1.0:
          run: integrator
          config:
            - docker-compose.yml
            - docker-compose.it.yml

Assuming the migration is fast enough, we could drop the first plugin declaration and rely on the depends_on: in docker-compose.it.yml, but that assumption does not hold in many cases.

pecigonzalo avatar Mar 07 '19 11:03 pecigonzalo

I think this is a generic issue beyond migrations, though: we might want to run multiple docker-compose commands. I love the plugin's ability to add a project name and clean up at the end of the step, but it's not flexible enough in many cases. It would be great to be able to run each step without chaining commands with &&, even, but I don't want to mix the two topics.

pecigonzalo avatar Mar 07 '19 11:03 pecigonzalo

I updated to v3.0.0 and I can't reproduce this anymore with the given example. I'll confirm whether this is still relevant, as it seems that with v3.0.0 I can indeed run multiple plugins.

pecigonzalo avatar Mar 11 '19 11:03 pecigonzalo

OK, validated: this is still a problem in v3.0.0. The error is caused by the container from a run remaining after the first plugin run, which then results in a "container with that name already exists" error. Maybe a param to pass --rm to run could help, but the tricky part is then getting the logs.

pecigonzalo avatar Mar 11 '19 15:03 pecigonzalo

Thanks for the example you posted! What would be the manual docker-compose commands you would run for your example, if it were just a manual bash script?

The integrator pattern looks very similar to what other people use wait-for-it.sh and friends for: https://docs.docker.com/compose/startup-order/

Might you be better off taking that approach, with a single service that uses depends_on and knows how to wait for the service it depends on to become available?
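
For reference, a minimal sketch of that approach as a variation of the docker-compose.it.yml you posted. It assumes the appropriate/curl image's entrypoint is curl (which your command: ["http://app/"] suggests), so it overrides the entrypoint with a shell; the retry loop is illustrative only and a real version would add a timeout:

version: "2"

services:
  integrator:
    image: "appropriate/curl"
    # The image's default entrypoint is assumed to be curl,
    # so override it to get a shell for the retry loop.
    entrypoint: ["sh", "-c"]
    command:
      - |
        # Wait until app answers; the last successful curl doubles
        # as the integration check itself. Illustrative only, no timeout.
        until curl -fsS http://app/; do
          echo "waiting for app..." >&2
          sleep 2
        done
    depends_on:
      - app
    networks:
      - app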

toolmantim avatar Mar 11 '19 21:03 toolmantim

Hey, we would run migrations for some of the containers and maybe some initial assertions. E.g. I want to check that the server is up and ready, or put a file in place that, in other systems, is generated by an async service or a scheduled run.

We could use the wait-for-it and depends_on workflow, yeah, although via a separate container, as bundling psql inside your container just to have a way of waiting is, imho, a bad practice. Adding it as a third container that waits is quite complex, as Compose does not have a "wait for exit" or similar command, only wait for started or healthy (and the healthy condition was removed in Compose file format v3).
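
For context, the "wait for healthy" variant looks like this in Compose file format 2.1 (a sketch only; the healthcheck command and intervals are illustrative, and pg_isready ships with the postgres image):

version: "2.1"

services:
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: mysecretpassword
    healthcheck:
      # Report healthy once Postgres accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 12

  migrator:
    image: "boxfuse/flyway:5-alpine"
    depends_on:
      db:
        # Supported in file format 2.1, but dropped in the v3 format
        condition: service_healthy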

The problem is that Compose, in many cases, is used only for testing; the real environment might be ECS/Kubernetes/Rancher/etc., and those might have their own logic or ways of restarting, so cramming all this logic into Compose just for this makes things considerably more complex.

A bit off topic, but in general BuildKite should allow multiple commands per step, so they run in the same environment/agent and then exit. This gets quite a bit more complex when dealing with plugins, but still, it would be amazing to be able to combine a plugin with a system command.

pecigonzalo avatar Mar 12 '19 09:03 pecigonzalo

Thanks for more context!

I hear you about not wanting to mess up the simple config. Sometimes it can make sense to have an extra "CI" docker-compose file, and then use the config plugin option with the value ["docker-compose.yml", "docker-compose.ci.yml"], which would be equivalent to docker-compose -f docker-compose.yml -f docker-compose.ci.yml ... That may not be suitable for you, but I wanted to mention it.
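
In pipeline terms that would be something like (sketch only, reusing your service names):

steps:
  - label: ":hammer: Integration"
    plugins:
      - docker-compose#v3.0.0:
          run: integrator
          config:
            - docker-compose.yml
            - docker-compose.ci.yml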

So we can try to figure out how we could actually implement/support your workflow, in an ideal scenario, what would be the manual docker-compose commands that you’d run from your own bash script, for those docker-compose configs you posted?

toolmantim avatar Mar 12 '19 09:03 toolmantim

Yeah, we do have a CI compose file; I simplified it for the sake of the example.

pecigonzalo avatar Mar 12 '19 10:03 pecigonzalo

@toolmantim I added a PR related to this. Adding the --rm option allows multiple runs of the plugin, and you get more logs due to how logs are collected. I set the option to true by default, as in most cases this is the desired behavior of a run, in my experience.
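
For illustration, a sketch of how that could surface in the pipeline; the option name run-rm below is purely hypothetical, the real name is whatever the PR ends up using:

steps:
  - label: ":hammer: Integration"
    plugins:
      - docker-compose#v3.0.0:
          run: migrator
          # Hypothetical option name for illustration only;
          # see the PR for the actual option and its default.
          run-rm: true
          config:
            - docker-compose.yml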

In a manual scenario I would run for example:

docker-compose -f this.yml -f that.yml up -d

docker-compose -f this.yml -f that.yml run --rm -T migrator 
# This is a flyway migrate in our case, and could even be a docker run -v $(pwd):/migration flyway --dbconnection

docker-compose -f this.yml -f that.yml run --rm -T integrator wait serviceA
docker-compose -f this.yml -f that.yml run --rm -T integrator wait serviceB
docker-compose -f this.yml -f that.yml run --rm -T integrator run

This is similar to what we run locally with a Makefile. We could wrap all the service waits and migrations inside the integrator run, but I believe that actually makes things more complex just to fit a limitation of the CI plugin, rather than making them simpler.

pecigonzalo avatar Mar 12 '19 10:03 pecigonzalo

From what I can see, the --rm option would take care of most of the problematic portion of the scenario. If that is not the case, please re-open the issue and we can continue to work out how best to adapt to it.

toote avatar Sep 21 '22 03:09 toote