Cache Save/Restore is not working between GitHub Runner and Container, even with the same absolute path.
I'm trying to cache files in a standard runner and then access them in a container, but the files aren't being restored as expected.
I noticed the issue mentioned in #1444 (which would be great to have fixed) and read through the guidance in the cross-os-cache docs. From what I understand, the full absolute path for saving and restoring the cache must be identical across environments. In an attempt to address this, I tried placing the files in /tmp/, but even when the full absolute path matches and I've set enableCrossOsArchive: 'true', it still doesn't work. You can see the failure in this run: https://github.com/sabbott1877/cache-issues/actions/runs/10608423120/job/29402474270.
name: Cache Issue Testing
on:
  push:
    branches:
      - '**'
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
permissions:
  contents: read
jobs:
  create-cache:
    name: Setup Cache
    runs-on: ubuntu-latest
    steps:
      - run: |
          echo ~
          echo $PWD
      # Trying to use /tmp to avoid issues with absolute paths, but still seems to be failing.
      - name: Create files
        run: |
          mkdir -p /tmp/cache-path/
          echo "test" > /tmp/cache-path/file.txt
      - name: Cache Files
        uses: actions/cache/save@v4
        with:
          path: /tmp/cache-path
          key: cache-path-${{ github.run_id }}
          enableCrossOsArchive: 'true'
  read-cache:
    name: Read Cache
    needs: create-cache
    runs-on: ubuntu-latest
    container:
      image: rockylinux:9
      options: -u root
    steps:
      - run: |
          echo ~
          echo $PWD
      - name: Get Cached Files
        uses: actions/cache/restore@v4
        with:
          path: /tmp/cache-path
          key: cache-path-${{ github.run_id }}
          enableCrossOsArchive: 'true'
          fail-on-cache-miss: 'true'
Can confirm the issue: I have a similar one in one of my projects.
@sabbott1877 I've found the reason for the issue.
The problem is that even though the same keys are used, the cache versions for the container and for the ubuntu-latest GitHub runner are different: https://github.com/actions/cache?tab=readme-ov-file#cache-version
The cache version depends on the compression method used https://github.com/actions/toolkit/blob/6c4e082c181a51609197e536ef5255a0c9baeef7/packages/cache/README.md?plain=1#L15, and the default ubuntu-latest runner uses zstd to compress data.
If a container doesn't have zstd installed, gzip is used instead: https://github.com/actions/toolkit/blob/6c4e082c181a51609197e536ef5255a0c9baeef7/packages/cache/src/internal/cacheUtils.ts#L100
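For reference, one way to see which compression a given container job will end up with is to probe for zstd before the cache step. This is just an illustrative diagnostic step I'm sketching here, not something the action provides or requires:

```yaml
# Illustrative diagnostic step (not part of the fix): reports whether zstd is
# on PATH inside the container, which per the toolkit code linked above
# determines whether the cache archive uses zstd or falls back to gzip.
- name: Check available compression
  run: |
    if command -v zstd >/dev/null 2>&1; then
      echo "zstd found: cache archives will use zstd"
    else
      echo "zstd not found: cache archives will fall back to gzip"
    fi
```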
The solution for me was to add zstd to the container; after that I can share caches between my container and the GitHub runner.
A better solution might be to allow explicitly setting a compression method via the action's parameters.
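For anyone hitting the same thing, here is roughly what adding zstd to the container looks like, applied to the read-cache job from the workflow above. The install step is an assumption on my part (Rocky Linux images use dnf, Debian/Ubuntu images would use apt-get, and the zstd package has to be available in the image's repositories); the only requirement is that zstd is present before actions/cache/restore runs:

```yaml
# Sketch of the fix applied to the read-cache job: install zstd in the
# container before restoring, so the cache version matches the one produced
# on the ubuntu-latest runner that saved it.
read-cache:
  name: Read Cache
  needs: create-cache
  runs-on: ubuntu-latest
  container:
    image: rockylinux:9
    options: -u root
  steps:
    # Assumes zstd is available in the image's package repositories
    # (dnf for Rocky Linux; Debian/Ubuntu images would use apt-get).
    - name: Install zstd
      run: dnf install -y zstd
    - name: Get Cached Files
      uses: actions/cache/restore@v4
      with:
        path: /tmp/cache-path
        key: cache-path-${{ github.run_id }}
        enableCrossOsArchive: 'true'
        fail-on-cache-miss: 'true'
```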
@vkuptcov can you share what it means to install zstd? What is missing in the pod?
Can you share the steps?
sudo apt-get -y install zstd
thanks
yes, it's working!
thanks for the help!
Thank you for digging in further and finding a solution! I haven't been able to test it yet, but it seems plausible. I would have thought enableCrossOsArchive: 'true' would have handled this, but maybe that's only focused on Windows/Linux cross-platform use.
It looks like enableCrossOsArchive: 'true' only tries to solve the cross-platform issue.
It doesn't even handle the case where an absolute path outside the workdir is used: if the workdirs in the container and in the native runner have different nesting levels, the archive is restored to a relative path rather than the absolute one.
In general, it looks like the interoperability side of this GitHub action isn't well designed/implemented yet; there are too many unspecified edge cases.
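One cheap guard against that silently-wrong restore is to assert that the expected absolute path actually exists right after the restore step. A minimal sketch, assuming the /tmp/cache-path layout from the workflow at the top of this issue:

```yaml
# Illustrative guard step: fails the job if the archive was unpacked somewhere
# other than the expected absolute path (e.g. relative to the workdir).
- name: Verify restored path
  run: |
    if [ ! -f /tmp/cache-path/file.txt ]; then
      echo "Cache was not restored to /tmp/cache-path as expected" >&2
      exit 1
    fi
```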
Yeah, it's really bad. I'm having issues with it too. I read in another issue that making sure your relative path starts with ./ fixes some issues, but I haven't had a chance to investigate that directly.
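If that tip holds, the difference would only be in how the path input is spelled. A hypothetical (unverified) example for a node_modules cache; the key shown is just a placeholder:

```yaml
# Hypothetical example of the "./" tip from the other issue (unverified here):
# spell the path as an explicit relative path rather than a bare directory name.
- uses: actions/cache/restore@v4
  with:
    path: ./node_modules   # instead of: node_modules
    key: node-modules-${{ hashFiles('package-lock.json') }}   # placeholder key
```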
See the issue to which I linked in actions/toolkit.
Just ran into this problem too when trying to restore a node_modules cache in a job that uses a Playwright container image.
I solved it by adding the following step before using the cache action:
- name: Install zstd
  run: apt-get update && apt-get install -y zstd
@breadadams I tried it, but it didn't work for me.
We're seeing a cache restoration issue too; however, in our case it appears that both the runner type and the container affect the result.
I ran 3 tests:
| Workflow Runs | Job in GH Runner (v2.321.0) | Job in Ubicloud Runner (v2.320.0) |
|---|---|---|
| on runner host | ❌️ | ❌️ |
| in Alpine container | ❌️ | ✅️ |
| in Alpine container + zstd | ❌️ | ❌️ |
GitHub runner details
Current runner version: '2.321.0'
Operating System
Ubuntu
22.04.5
LTS
Runner Image
Image: ubuntu-22.04
Version: 20241124.1.0
Included Software: https://github.com/actions/runner-images/blob/ubuntu22/20241124.1/images/ubuntu/Ubuntu2204-Readme.md
Image Release: https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20241124.1
Ubicloud runner details
Current runner version: '2.320.0'
Runner name: [redacted]
Runner group name: 'Default'
Machine name: 'vme64rke'
Operating System
Ubuntu
22.04.5
LTS
Runner Image
Image: ubuntu-22.04
Version: 20241016.1.0
Included Software: https://github.com/ubicloud/runner-images/blob/ubuntu22/20241016.1/images/ubuntu/Ubuntu2204-Readme.md
Image Release: https://github.com/ubicloud/runner-images/releases/tag/ubuntu22%2F20241016.1
Ubicloud Managed Runner
Name: [REDACTED]
Label: ubicloud-standard-4
Arch: x64
Image: github-ubuntu-2204
VM Host: [REDACTED]
VM Pool:
Location: github-runners
Datacenter: FSN1-DC17
Project: [REDACTED]
Console URL: [REDACTED]
A workaround in our case was to make sure that all jobs that share a cache run on the same type of runner.
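Concretely, that just means giving every job that touches a given cache key the same runs-on label. A minimal sketch reusing the paths from the workflow at the top of this issue (the ubicloud-standard-4 label is only an example; any single runner type works as long as both jobs use it):

```yaml
# Sketch of the workaround: the save and restore jobs run on the same runner
# type, so the cache version (runner tooling/compression) matches.
jobs:
  create-cache:
    runs-on: ubicloud-standard-4   # example label; use the same one everywhere
    steps:
      - run: |
          mkdir -p /tmp/cache-path/
          echo "test" > /tmp/cache-path/file.txt
      - uses: actions/cache/save@v4
        with:
          path: /tmp/cache-path
          key: cache-path-${{ github.run_id }}
  read-cache:
    needs: create-cache
    runs-on: ubicloud-standard-4   # same runner type as the job that saved the cache
    steps:
      - uses: actions/cache/restore@v4
        with:
          path: /tmp/cache-path
          key: cache-path-${{ github.run_id }}
          fail-on-cache-miss: 'true'
```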