Cache-server memory utilization
We have observed that the application consumes a significant amount of memory at runtime. Memory usage looks normal at startup, but once our workflows start using the cache server there is a noticeable increase in memory consumption.
As you can see from the image below, the majority of the memory is non-evictable.
Is there a specific reason for this behavior? Could it be that the application is retaining something in memory after caching operations?
Thanks
Thanks for reporting this! 🙏
I'll look into it.
Could you please share your cache server hosting setup (docker-compose.yml or Kubernetes) and maybe some workflow files?
Hi @LouisHaftmann, sorry for the delay.
We are running the cache server in Kubernetes; below is the deployment configuration we are using.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gha-cache-server
  labels:
    app.kubernetes.io/name: gha-cache-server
    app.kubernetes.io/instance: gha-cache-server
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 0
  selector:
    matchLabels:
      app.kubernetes.io/name: gha-cache-server
      app.kubernetes.io/instance: gha-cache-server
  template:
    metadata:
      labels:
        app.kubernetes.io/name: gha-cache-server
        app.kubernetes.io/instance: gha-cache-server
    spec:
      containers:
        - name: gha-cache-server
          image: ghcr.io/falcondev-oss/github-actions-cache-server:3.1.0
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          env:
            - name: URL_ACCESS_TOKEN
              value: "cache"
            - name: API_BASE_URL
              value: "http://gha-cache-server.arc-system.svc.cluster.local"
            - name: DEBUG
              value: "false"
            - name: CLEANUP_OLDER_THAN_DAYS
              value: "7"
          resources:
            requests:
              memory: 1Gi
              cpu: 100m
            limits:
              memory: 3Gi
              cpu: 500m
          volumeMounts:
            - name: cache-data
              mountPath: /app/.data
      volumes:
        - name: cache-data
          persistentVolumeClaim:
            claimName: gha-cache-server
```
It is quite hard to share the workflows because they are a mix of custom actions and reusable workflows. The main configuration we use is the actions/setup-node@v4 action with a yarn install command to install all the dependencies, like the example below.
- name: "Setup node${{ inputs.node-version }} and GitHub npm registry"
uses: actions/setup-node@v4
id: setup-node
with:
node-version: ${{ inputs.node-version }}
cache: yarn
registry-url: "https://npm.pkg.github.com"
env:
NODE_AUTH_TOKEN: ${{ inputs.npm-github-registry-token }}
- name: "Install dependencies"
shell: bash
run: yarn install --quiet --no-progress --frozen-lockfile
We also have some workflows for PHP that use the actions/cache@v4 action; below is another example:
- name: "Cache dependencies"
uses: actions/cache@v4
with:
path: ${{ steps.composer-cache.outputs.dir }}
key: ${{ runner.os }}-composer-${{ hashFiles('./composer.lock') }}
restore-keys: ${{ runner.os }}-composer-
- name: "Install dependencies"
shell: bash
run: |
composer install
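For reference, the composer-cache step referenced above is just the usual step from the actions/cache Composer example that resolves Composer's cache directory. A simplified sketch (our actual step may differ slightly):

```yaml
# Resolves Composer's cache directory so actions/cache can use it as the path
- name: "Get Composer cache directory"
  id: composer-cache
  shell: bash
  run: echo "dir=$(composer config cache-files-dir)" >> "$GITHUB_OUTPUT"
```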
Probably the main thing I didn't mention is that we have more than 120 different caches from different workflows and repositories.
Let me know if any other information is required, and thanks for the support!
Have you tried using postgres or mysql instead of sqlite?
Not yet, actually. I'll give it a shot and get back to you, thanks!
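If I understand correctly, that would mean pointing the deployment at an external database through environment variables, roughly along these lines (a sketch only; the variable names below are my assumption and need to be checked against the cache server documentation):

```yaml
# Sketch only: verify the exact variable names in the
# github-actions-cache-server documentation before using.
- name: DB_DRIVER # assumed name; selects Postgres instead of the default SQLite
  value: "postgres"
- name: DB_HOST # assumed name; hostname of the external Postgres instance
  value: "postgres.example.svc.cluster.local"
- name: DB_PORT # assumed name
  value: "5432"
- name: DB_USER # assumed name
  value: "gha_cache"
- name: DB_PASSWORD # assumed name; better sourced from a Secret
  valueFrom:
    secretKeyRef:
      name: gha-cache-server-db
      key: password
- name: DB_NAME # assumed name
  value: "gha_cache"
```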
Hi @LouisHaftmann, we are still monitoring it, but moving to an external Postgres instance definitely helped, thank you!
No problem 🙏
I will close this for now. Feel free to reopen if you notice anything.