Deploying from self-hosted
I think there might be an issue with using self-hosted runners to deploy caprover tar files.
Here is what I have:
```yaml
name: Build & deploy

on:
  workflow_dispatch:
  push:

jobs:
  build-and-deploy:
    name: Build & Deploy
    runs-on: self-hosted
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Install Node.js
        uses: actions/setup-node@v3
        with:
          node-version: 18
      - uses: pnpm/action-setup@v2
        name: Install pnpm
        id: pnpm-install
        with:
          version: 8.5.0
          run_install: false
      - name: Get pnpm store directory
        id: pnpm-cache
        shell: bash
        run: |
          echo "STORE_PATH=$(pnpm store path)" >> $GITHUB_OUTPUT
      - uses: actions/cache@v3
        name: Setup pnpm cache
        with:
          path: ${{ steps.pnpm-cache.outputs.STORE_PATH }}
          key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
          restore-keys: |
            ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
      - name: Install dependencies
        run: pnpm install
      - name: Build
        run: pnpm turbo run build --filter web
      - uses: a7ul/tar-action@v1.1.0
        with:
          command: c
          cwd: './apps/web'
          files: |
            build/
            captain-definition
            nginx.conf
            Dockerfile
          outPath: deploy.tar
      - name: Deploy App to CapRover
        uses: caprover/deploy-from-github@v1.0.1
        with:
          server: '${{ secrets.CAPROVER_SERVER }}'
          app: '${{ secrets.APP_NAME }}'
          token: '${{ secrets.APP_TOKEN }}'
```
This workflow fails with the following message:

```
Preparing deployment to CapRover...
Ensuring authentication...
Deploying *** to https://captain.site.cyka.info.../
Something bad happened: cannot deploy *** at https://captain.site.cyka.info./
ENOENT: no such file or directory, stat '/github/workspace/deploy.tar'
```
Meanwhile, if you try to run the same thing on a GitHub-provided runner like ubuntu-20.04, it deploys without any problems.
I think this might be a misconfiguration on my end, where CapRover is looking for the `deploy.tar` file inside of
`/github/workspace` instead of `/github/workspace/project_name/project_name`.
Hmm... not sure if it's related to CapRover. There is no directory handling done in CapRover.
The CLI just reads the `deploy.tar` from the current working directory:
https://github.com/caprover/deploy-from-github/blob/4f2b50c37be9f3f325c67b16660e321395841040/entrypoint.sh#L9-L11
One thing you can try is changing `cwd: './apps/web'` to `cwd: './'` - does it help?
I'm not sure I understand.
The `cwd` is specified for the archive action and not CapRover. The output of that step is a `deploy.tar` file at `/github/workspace/project_name/project_name/deploy.tar`.
Would adding a `cwd` option to the CapRover action in the workflow file have a different effect? Well, I guess I'll try doing that first thing tomorrow and report back the results.
The `cwd` option is not available for the CapRover deploy action. If you look at the command I pasted above, you'll see that CapRover reads `deploy.tar` from the current directory - whereas your `deploy.tar` is stored elsewhere.
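If it helps with debugging, you could drop a couple of steps like these between the tar step and the deploy step - just a sketch (the step names are made up, and the exact paths depend on how your self-hosted runner is set up):

```yaml
      - name: Locate deploy.tar on the runner
        shell: bash
        run: |
          # $GITHUB_WORKSPACE on the runner is the directory that Docker-based
          # actions (like deploy-from-github) see as /github/workspace.
          echo "workspace: $GITHUB_WORKSPACE"
          find "$GITHUB_WORKSPACE" -maxdepth 3 -name deploy.tar

      - name: Ensure deploy.tar sits at the workspace root
        shell: bash
        run: |
          # Move the tar to the workspace root if it was created elsewhere.
          src=$(find "$GITHUB_WORKSPACE" -maxdepth 3 -name deploy.tar | head -n 1)
          if [ -n "$src" ] && [ "$src" != "$GITHUB_WORKSPACE/deploy.tar" ]; then
            mv "$src" "$GITHUB_WORKSPACE/deploy.tar"
          fi
```

If the `find` shows the tar somewhere other than the workspace root, that would explain the ENOENT.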
@Fractal-Tess Did you manage to make it work? I'm asking because I'm also considering using a self-hosted GH Actions runner...
@Boscop I did fix it, yeah. IIRC, it was an issue with the pathing inside of the self-hosted runner - there was a problem with where the deploy file was being created.
Sadly I have since moved to other services and I no longer use Caprover, so I'm unable to assist you in that regard.
@Fractal-Tess Thanks for your quick reply.
I'm curious, which PaaS are you using now?
I'm using Coolify now, but honestly, if a couple of features were added to Caprover, I'd come back in an instant.
@Fractal-Tess - what features? And why do you come back?
Hey @githubsaturn o/
> what features?

My biggest blocker is the inability to deploy multiple containers at once. For example, I would love it if Caprover could implement a way for me to deploy straight from docker-compose. Coolify does that, and it does it pretty well. I'm not sure if you @githubsaturn have ever tried using it, but here is a simple screenshot of what I mean:
After a service has been added, we get to this screen, where we see each service and can make changes to it.
I don't mean for Caprover to directly copy from Coolify - although that wouldn't be too bad, since I feel like this would be extremely useful for Caprover users - but rather to try to improve upon it.
For instance, the way domain/port/volume mapping to services works is extremely unintuitive, in my opinion. I feel like there is much room for improvement on that end.
Also, another great feature I love about Coolify is the ability to control multiple servers at once (although it doesn't work half of the time). I find it incredibly useful to have a single interface to control other servers and what is on them without having to install the application manager itself (Coolify/Caprover). It allows me to join many small machines and have control over them without the overhead of them having to run something like Coolify or Caprover. Think of it as having them be managed by the Coolify/Caprover.
> And why do you come back?

I'm not sure what that is supposed to mean. Perhaps you meant to write it out in another way, but truth be told - I love Caprover and I've been using it for the last 3 years. I only switched over to Coolify because I reinstalled my server and wanted to try something different. So far, it's proven to have many more features than I expected, but it's harder to use in some cases.
Thanks for the feedback! As for "And why do you come back?" I wanted to learn what CapRover does better so we don't end up losing the greatness :)
Of course. Caprover has been fantastic. It has hands down the best one-click apps library, no questions asked. Port mapping, volume mapping (bind or named), scaling with Docker Swarm, having an nginx reverse proxy, and clear communication between two containers are some of the things I find lacking in Coolify - Caprover is just superior in that regard.
@Fractal-Tess Thanks for sharing your experience :)
> Also, another great feature I love about Coolify is the ability to control multiple servers at once (although it doesn't work half of the time). I find it incredibly useful to have a single interface to control other servers and what is on them without having to install the application manager itself (Coolify/Caprover). It allows me to join many small machines and have control over them without the overhead of them having to run something like Coolify or Caprover. Think of it as having them be managed by the Coolify/Caprover.
I'm curious, have you tried Dokploy? It also supports docker-compose :)
I've been trying all 3. Coolify was very slow on my 1 GB RAM Google Cloud VM, becoming unresponsive and causing Gateway Timeouts, whereas CapRover and Dokploy were faster. I'm exploring Dokploy now because I want to use EdgeDB together with my app in a docker-compose.yml file.
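Roughly the kind of compose file I have in mind - just a sketch, the service layout and EdgeDB settings below are placeholders rather than a tested config:

```yaml
# Rough sketch of an app-plus-EdgeDB grouping; names and settings are
# placeholders - check the EdgeDB image docs for the exact options.
services:
  web:
    build: .
    ports:
      - '3000:3000'
    depends_on:
      - edgedb
  edgedb:
    image: edgedb/edgedb
    environment:
      EDGEDB_SERVER_SECURITY: insecure_dev_mode
    volumes:
      - edgedb-data:/var/lib/edgedb/data
volumes:
  edgedb-data:
```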
You have a very good point there.
It is indeed slow and bulky because of the PHP runtime (I assume). However, since my server is quite beefy (4 vCPU, 24 GB RAM), it doesn't make that much of a dent in my experience. I would happily trade some performance and memory for a good DX. Dokploy seems like a trimmed-down version of Coolify. I haven't tried it, since I'm giving Coolify a chance right now, but it seems like they are very, very similar.
Talking about servers and efficiency is another topic. I do 100% agree that something simpler like Caprover or Dokploy is better suited for small-sized VMs. Thinking about this, I probably would not even use Caprover or Dokploy because of the nodejs runtime overhead.
Maybe someday we are going to get a Rust-based alternative, and I'll be the first to jump ship.
FWIW, CapRover uses ~0.06 GB of RAM and next to zero CPU - there is virtually no performance gain by going to Rust / Golang. It's because all the heavy workload is offloaded to resource-efficient dependencies such as Docker and nginx.
@Boscop - deploying a group of apps using Docker compose is fairly doable on CapRover:
https://caprover.com/docs/docker-compose.html#how-to-run-docker-compose-on-caprover
> FWIW, CapRover uses ~0.06 GB of RAM and next to zero CPU - there is virtually no performance gain by going to Rust / Golang. It's because all the heavy workload is offloaded to resource-efficient dependencies such as Docker and nginx.
That is absolutely based my friend. You have created something amazing and I truly respect that. Still though, that 60 MB is about 12% of a 512 MB VM, or 23% of a 256 MB one. Now, don't get me wrong - this is pretty amazing, and most likely no one would care about getting ~15% additional RAM when using one of these services.
Honestly, if I was even a little bit as talented as you, I would go ahead and start my own one in Rust instead. Sadly, that is out of my reach.
I also would have loved to try and contribute to Caprover, however, the codebase is kind of hard to read, at least for me (novice programmer - still a student).
By the way @githubsaturn, have you thought about switching from Netdata to Prometheus/Grafana instead? Recently I have had the pleasure of using fly.io, and their analytics platform is amazing.
Each project you create over at fly.io has its own Grafana-powered metrics web UI, which I find very sweet. Also, Netdata is quite heavy AFAIK - please correct me if I'm wrong about this.
Yes, but they are not a replacement for Netdata! Grafana is simply a visualization tool. You still need a data collection tool like Prometheus. On top of being operationally more complex than an all-in-one solution, and speaking of memory usage, Prometheus uses close to 1 GB of RAM in a production environment, making it a no-go for budget-friendly machines.
