
Implement conditional depends_on

Open Elkasitu opened this issue 4 years ago • 10 comments

This PR implements the necessary changes so that podman-compose complies with the compose v3 spec's long syntax for the depends_on block [1].

This essentially means that services that depend on other services can be set up in such a way that they'll only start once the defined conditions are met for each service they depend on.

[1] https://github.com/compose-spec/compose-spec/blob/0c768bea2e06d5550bc5f390bebe1f830dca8756/spec.md#long-syntax-1
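For reference, the long syntax from the spec looks roughly like this (service names and images here are illustrative, not taken from the PR):

```yaml
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy   # long syntax: wait for the healthcheck
      cache:
        condition: service_started   # equivalent to the short syntax
  db:
    image: postgres
    healthcheck:
      test: ["CMD", "pg_isready"]
  cache:
    image: redis
```

The short syntax `depends_on: [db, cache]` is equivalent to giving every dependency the `service_started` condition.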

Elkasitu avatar Mar 20 '22 17:03 Elkasitu

The approach I'm aiming for delegates the logic to podman using --requires:

podman pod create --name mypod
podman create --pod mypod --name mypod_db
podman create --pod mypod --requires mypod_db --name mypod_web
podman pod start mypod
podman pod logs -f mypod

https://docs.podman.io/en/latest/markdown/podman-create.1.html#requires-container https://docs.podman.io/en/latest/markdown/podman-run.1.html#requires-container

muayyad-alsadi avatar Mar 21 '22 05:03 muayyad-alsadi

That only takes care of the case in which the condition is service_started or it's a short-syntax depends_on, correct? We would still need the current sort logic to make the other conditions work, I think

Elkasitu avatar Mar 21 '22 08:03 Elkasitu

it's a short syntax depends_on, correct?

Short syntax means requiring the container to be started. For long syntax, the current implementation converts all dependencies to short ones.

The downside of this is that it does not handle healthy, that is, not just waiting for the container to be up, but up and healthy (based on some user-defined health check),

either defined in the compose file

https://github.com/compose-spec/compose-spec/blob/master/spec.md#healthcheck

healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"]
  interval: 1m30s
  timeout: 10s
  retries: 3
  start_period: 40s

or in the Dockerfile

We can add a test with busybox httpd -p 8080 /etc/ and define a healthcheck that runs wget -q -O - http://localhost:8080/hosts
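Such a test could be sketched as a compose service like this (the exact httpd flags and intervals are assumptions, not a tested configuration):

```yaml
services:
  web:
    image: busybox
    # serve /etc/ on port 8080 in the foreground
    command: ["httpd", "-f", "-p", "8080", "-h", "/etc/"]
    healthcheck:
      # healthy once /hosts is reachable over HTTP
      test: ["CMD", "wget", "-q", "-O", "-", "http://localhost:8080/hosts"]
      interval: 5s
      timeout: 2s
      retries: 3
```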

I'll post my personal opinion very soon on why I see those checks as useless start conditions, despite being very useful from an operational point of view (in the k8s world they differentiate between liveness and readiness conditions)

  • livenessProbe
  • readinessProbe
  • startupProbe

and they have built in port and tcp checks

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
      - name: Accept
        value: application/json

startupProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
      - name: User-Agent
        value: MyUserAgent

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

muayyad-alsadi avatar Mar 21 '22 13:03 muayyad-alsadi

I said:

I'll post my personal opinion very soon

so let's go

first of all thank you for your contribution I really really appreciate your time and effort. I'll keep this open until we find the best way to implement it.

for context and vision please refer to our discussion here

Regarding how docker-compose implements it and how should we follow compose spec

Since docker-compose does not have k8s-like probes (started vs. ready vs. healthy), the main difference between ready and healthy is that, while both follow started, a healthy container might later become unhealthy.

So regarding the start condition, it should use readiness, not healthiness (because we don't stop the dependent container when the dependency later becomes unhealthy).

Take a db and a web service: we start the web after the db is ready. If there is later a moment when the db is under heavy load, causing the health check to report it as unhealthy for some time, we don't pull the web down as a result; it's just a start condition. Maybe we ask podman to support --require=<CONTAINER>:<STATUS>, where status is created, started, ready, healthy, or something like this.
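As a sketch of what condition handling could look like on the podman-compose side (the names and structure below are hypothetical, not podman-compose's actual implementation), one could map each compose condition to a predicate over the container state that podman inspect reports, and poll until it holds:

```python
# Hypothetical sketch: map compose depends_on conditions to predicates over
# a container-state snapshot (e.g. the .State object parsed from
# `podman inspect`). Not podman-compose's actual implementation.
import time

CONDITIONS = {
    # started: the container is running
    "service_started": lambda st: st.get("Status") == "running",
    # healthy: running and its user-defined healthcheck reports healthy
    "service_healthy": lambda st: st.get("Status") == "running"
    and st.get("Health", {}).get("Status") == "healthy",
    # completed successfully: exited with code 0
    "service_completed_successfully": lambda st: st.get("Status") == "exited"
    and st.get("ExitCode") == 0,
}

def wait_for(condition, get_state, timeout=60.0, interval=1.0):
    """Poll get_state() until `condition` holds or `timeout` elapses."""
    pred = CONDITIONS[condition]
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if pred(get_state()):
            return True
        time.sleep(interval)
    return False
```

In the real tool, get_state would shell out to something like podman inspect --format '{{json .State}}' <container>; here it is injectable so the logic can be tested without podman.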

Regarding service_completed_successfully, it's completely a joke: in the compose spec there is nothing similar to init-containers, and there are no k8s-like cron jobs either. There is nothing wrong with this (for single-host orchestration) because it can be done using compose profiles, cron jobs, podman exec, systemd timers, etc. But the problem is: why would I run something after something that completes successfully via compose? Other than swarm mode / multi-host, why would I need such a thing, given that there are no init nor jobs in the compose spec? In my personal opinion it's a half-baked feature.

if you have a real-world use case for service_completed_successfully please let me know.

muayyad-alsadi avatar Mar 21 '22 22:03 muayyad-alsadi

maybe we ask podman to support --require=<CONTAINER>:<STATUS>

Sure, I don't really care where it's implemented as long as it works. It's currently kind of a blocker for our project since we need the feature, so we've been debating migrating to docker-compose, creating "ready scripts", or contributing the feature to podman-compose. I really wanted to do the latter because I figured other people also want the feature, but I didn't really have time to contribute during work hours, so I decided to implement it over the weekend. Since it's already implemented, we might just use my fork until a solution exists either in podman-compose or in podman.

The reasons I implemented it here are mostly:

  • That's the way it's done in docker-compose and seemed easy enough to implement
  • podman-compose as a project probably has less overhead than podman, so we could get the feature shipped to RHEL and Fedora faster; worst case, we can just download the script and run it locally

so regarding start condition, it should be with readiness not healthiness (because we don't stop the dependent one it when it later became unhealthy)

This is a valid opinion, but I'm not really looking to discuss semantics or challenge the compose spec status quo; this is just a feature that I need, so I went ahead and implemented it as defined by the spec.

if you have a real-world use case for service_completed_successfully please let me know.

Yes. In fact, in one of our branches in which we test docker-compose, we use a container exactly like a k8s init-container: it loads .sql files into our database (extensions, row-level security, etc.), and once the container exits with status code 0 we can start up our web service. There are probably many other use cases, but again, as I mentioned earlier, I'm not here to challenge the compose spec status quo; even if I didn't use it I would have implemented it anyway.
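That init-container pattern maps directly onto the long syntax; a sketch with illustrative names and paths (not the actual project's files) might look like:

```yaml
services:
  db:
    image: postgres
    healthcheck:
      test: ["CMD", "pg_isready"]
  db-init:
    image: postgres
    # one-shot job: load extensions, row-level security, etc., then exit 0
    command: ["psql", "-h", "db", "-f", "/init.sql"]
    depends_on:
      db:
        condition: service_healthy
  web:
    build: .
    depends_on:
      db-init:
        condition: service_completed_successfully
```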

Elkasitu avatar Mar 22 '22 08:03 Elkasitu

maybe we ask podman to support --require=<CONTAINER>:<STATUS>

Sure, I don't really care where it's implemented as long as it works, it's currently kind of a blocker for our project since we need the feature,

we will carry your patch until podman has a solution, and then we either detect the version or branch and state the dependency (let's say podman-compose 1.2 requires podman 4.5 which has the fix; if you don't have it, then use podman-compose < 1.2)

I see you are interested in podman-compose that's why I'm trying to onboard you with the reasoning behind those choices.

Long, long ago, rootless podman did not have inter-container communication. In podman-compose we created a pod or a container, then shared the network namespace between all the containers so that they were able to communicate using 127.0.0.1, and we used --add-host to simulate DNS. Later podman supported networks, so we no longer have this workaround; we keep it in the 0.1.x branch for those who have a legacy version.

similarly we will carry your patch until it can be done with podman.

Keep in mind the final objective is that podman-compose up -d should set up (create ...) things then end with exec podman pod start mypod. If there is a case that can't be handled by exec podman pod start mypod, it should be detected and handled as an exception.

Currently podman-compose systemd generates a unit file that calls podman pod start mypod; please take your time checking how that systemd unit file works

https://github.com/containers/podman-compose/issues/443

Because I'll recommend people use systemd, cron jobs, etc. I don't tell people to open screen or tmux and type podman-compose up there; I tell people to register it as a service with systemctl --user enable --now podman-compose@myproj. I would love to cover any use case (like yours).

Your approach breaks this; it's more of a "just run it in screen" approach.

I'll get back to read all of your points maybe this weekend.

muayyad-alsadi avatar Mar 22 '22 08:03 muayyad-alsadi

I'll get back to read all of your points maybe this weekend.

Hey no worries, take your time, I'm gonna leave some comments here but don't feel pressured into rushing your answers

we will carry your patch until podman have a solution and then we either detect the version or branch and state dependency (let's say podman-compose 1.2 requires podman 4.5 which has the fix, if you don't have it then use podman-compose < 1.2)

I'm ok with that. I think it's important that podman-compose provides this option, because using the latest version and integrating it into a project is much easier than with podman; we can just install it via pip or even clone it and add it to our path. However, podman likely has way more overhead, and the distros that we use have very old versions of podman (Fedora 35 uses 3.4.4 and the latest RHEL 3.4.2).

I see you are interested in podman-compose that's why I'm trying to onboard you with the reasoning behind those choices.

Definitely, hopefully my team and I can contribute more in the future :) I hope you didn't take my comment the wrong way; when I said I didn't care where it's implemented, I was simply stating that for me it's fine either way

your approach breaks this, it's more of just run it in screen.

Did you mean it's not compatible? Because if so, I agree: if systemd units simply call podman commands, then it will not and cannot work the same until this is implemented in podman

Elkasitu avatar Mar 23 '22 20:03 Elkasitu

Yes, here is the systemd unit

$ cat ~/.config/containers/compose/projects/simple.env
COMPOSE_PROJECT_DIR=/home/alsadi/proj/podman-compose/tests/simple
COMPOSE_FILE=docker-compose.yaml
COMPOSE_PROJECT_NAME=simple
COMPOSE_PATH_SEPARATOR=:

$ systemctl --user cat podman-compose@simple
# /usr/lib/systemd/user/podman-compose@.service

[Unit]
Description=%i rootless pod (podman-compose)

[Service]
Type=simple
EnvironmentFile=%h/.config/containers/compose/projects/%i.env
ExecStartPre=-/home/alsadi/proj/podman-compose/podman_compose.py up --no-start
ExecStartPre=/usr/bin/podman pod start pod_%i
ExecStart=/home/alsadi/proj/podman-compose/podman_compose.py wait
ExecStop=/usr/bin/podman pod stop pod_%i

[Install]
WantedBy=default.target

As you can see, we just assert container creation with podman-compose up --no-start, then podman pod start mypod.

It should work since we create them with --require.

muayyad-alsadi avatar Mar 23 '22 21:03 muayyad-alsadi

@Elkasitu Just FYI, I would be open for this PR to land. If it improves compatibility with docker-compose, then it should land regardless of whether there are better ways to achieve something.

It's been two years since this PR was opened, so I understand it's likely you won't have interest in it anymore. Maybe someone else can pick this up in that case.

p12tic avatar Mar 08 '24 23:03 p12tic

@p12tic I can take a look, hopefully the conflicts won't be too much of a hassle

Elkasitu avatar Mar 11 '24 17:03 Elkasitu