[Bug?]: "docker compose run" with Docker cache fails after building (it cannot find a Yarn-installed module in the cache folder)
Self-service
- [ ] I'd be willing to implement a fix
Describe the bug
I use this command to first build a Docker image:
yarn exec "docker compose --env-file rioniz_initializer.env -f docker_compose_rioniz_initializer.yaml build"
Then I use the following command to run it:
yarn exec "EXEC_COMMAND=initializeDatabase docker compose --env-file rioniz_initializer.env -f docker_compose_rioniz_initializer.yaml run --rm rioniz_initializer"
But it fails with the error below (it works fine with the old Yarn v1, but not with Yarn v4; I suspect something in Yarn is causing docker compose run to fail):
node:internal/modules/run_main:129
triggerUncaughtException(
^
Error: Required package missing from disk. If you keep your packages inside your repository then restarting the Node process may be enough. Otherwise, try to run an install first.
Missing package: tsx@npm:4.19.1
Expected package location: /root/.yarn/berry/cache/tsx-npm-4.19.1-aace436c49-10c0.zip/node_modules/tsx/
This is my Dockerfile:
# Use node alpine as the base image
FROM node:20-alpine
# Set the working directory to /app
WORKDIR /app
# Copy the necessary files for dependency installation
COPY package.json ./
COPY yarn.lock ./
COPY tsconfig.json ./
# Copy the source code into the container
COPY src ./src
# Enable production env for node
ENV NODE_ENV=production
# Install dependencies using Yarn 4
RUN corepack enable
RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \
yarn --immutable
# Use a dynamic command to run the yarn initialization script defined by EXEC_COMMAND
CMD sh -c " yarn $EXEC_COMMAND"
These are my dependencies in package.json (tsx is already listed in dependencies):
"dependencies": {
"@keycloak/keycloak-admin-client": "25.0.5",
"axios": "1.7.7",
"commander": "12.1.0",
"cron-validate": "1.4.5",
"dotenv": "16.4.5",
"dotenv-flow": "4.1.0",
"json5": "2.2.3",
"lodash": "4.17.21",
"mysql2": "3.11.2",
"node-cron": "3.0.3",
"tsx": "4.19.1",
"typescript": "5.6.2",
"yaml": "2.5.1",
"yup": "1.4.0"
},
"devDependencies": {
"@types/dotenv": "8.2.0",
"@types/jest": "29.5.11",
"@types/lodash": "4.14.199",
"@types/node": "20.11.5",
"@types/node-cron": "3.0.11",
"@typescript-eslint/eslint-plugin": "6.21.0",
"@typescript-eslint/parser": "6.21.0",
"dotenv-cli": "7.4.2",
"eslint": "8.57.0",
"eslint-config-prettier": "9.1.0",
"eslint-plugin-jest": "28.6.0",
"nodemon": "3.1.4",
"prettier": "3.3.2",
"pretty-quick": "4.0.0"
},
To reproduce
It should be reproducible from the bug description, but if necessary I can also create a repository for it and share it.
Environment
System:
OS: Linux 5.15 Alpine Linux
CPU: (16) x64 AMD Ryzen 7 5800H with Radeon Graphics
Binaries:
Node: 20.17.0 - /tmp/xfs-19d7b165/node
Yarn: 4.4.1 - /tmp/xfs-19d7b165/yarn
npm: 10.8.2 - /usr/local/bin/npm
Additional context
No response
If I change RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn yarn --immutable to a plain RUN yarn --immutable, it starts working again, but builds become much slower.
It's confusing; I'm not sure how to properly cache the yarn install across builds while still being able to use the packages in subsequent runs. In my tests, Yarn v1 with the Docker cache is actually faster than Yarn v4 without it.
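For reference, this is roughly the workaround described above, at the cost of losing the build cache:
- RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \
-     yarn --immutable
+ RUN yarn --immutable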
I created a minimal branch for it https://github.com/uchar/yarnbugs/tree/yarn_docker_cache_error
Clone the project, switch to the yarn_docker_cache_error branch, then run these commands:
docker build -t yarn_bug .
yarn exec "docker run -d -e EXEC_COMMAND=test1 yarn_bug"
Check the Docker logs and you should see this error:
2024-09-13 23:22:31
2024-09-13 23:22:31 node:internal/modules/run_main:129
2024-09-13 23:22:31 triggerUncaughtException(
2024-09-13 23:22:31 ^
2024-09-13 23:22:31 Error: Required package missing from disk. If you keep your packages inside your repository then restarting the Node process may be enough. Otherwise, try to run an install first.
2024-09-13 23:22:31
2024-09-13 23:22:31 Missing package: tsx@npm:4.19.1
2024-09-13 23:22:31 Expected package location: /root/.yarn/berry/cache/tsx-npm-4.19.1-aace436c49-10c0.zip/node_modules/tsx/
2024-09-13 23:22:31
2024-09-13 23:22:31 at makeError (/app/.pnp.cjs:6703:34)
2024-09-13 23:22:31 at resolveUnqualified (/app/.pnp.cjs:8424:17)
2024-09-13 23:22:31 at resolveRequest (/app/.pnp.cjs:8475:14)
2024-09-13 23:22:31 at Object.resolveRequest (/app/.pnp.cjs:8531:26)
2024-09-13 23:22:31 at resolve$1 (file:///app/.pnp.loader.mjs:2043:21)
2024-09-13 23:22:31 at nextResolve (node:internal/modules/esm/hooks:866:28)
2024-09-13 23:22:31 at Hooks.resolve (node:internal/modules/esm/hooks:304:30)
2024-09-13 23:22:31 at MessagePort.handleMessage (node:internal/modules/esm/worker:196:24)
2024-09-13 23:22:31 at [nodejs.internal.kHybridDispatch] (node:internal/event_target:820:20)
2024-09-13 23:22:31 at MessagePort.<anonymous> (node:internal/per_context/messageport:23:28)
2024-09-13 23:22:31
2024-09-13 23:22:31 Node.js v20.17.0
First of all, YARN_CACHE_FOLDER has no effect here because enableGlobalCache has defaulted to true since 4.0.0.
The main issue here is that RUN --mount mounts a build-time cache, which means it is not available when running the container.
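As a small illustration (a hypothetical standalone Dockerfile, not part of the fix below), a BuildKit cache mount only exists while its own RUN instruction executes and is never committed to an image layer:
FROM alpine
# The cache directory is visible and writable only during this RUN step.
RUN --mount=type=cache,target=/root/.yarn touch /root/.yarn/marker && ls -A /root/.yarn
# In later layers, and when the container runs, the cached files are gone.
RUN ls -A /root/.yarn || true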
There are a few things you can do here. First, you can use the tried-and-true technique of avoiding layer-cache invalidation by installing earlier, before copying the rest of the sources:
FROM node:20-alpine
WORKDIR /app
+ RUN corepack enable
+
COPY package.json ./
COPY yarn.lock ./
+
+ RUN yarn --immutable
+
COPY tsconfig.json ./
-
COPY src ./src
ENV NODE_ENV=production
- RUN corepack enable
- RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \
- yarn --immutable
-
CMD sh -c " yarn $EXEC_COMMAND"
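Applied to the Dockerfile from the report, that gives roughly:
FROM node:20-alpine
WORKDIR /app
RUN corepack enable

# Copy only the manifest and lockfile first so the install layer is reused
# as long as the dependencies don't change.
COPY package.json ./
COPY yarn.lock ./

RUN yarn --immutable

COPY tsconfig.json ./
COPY src ./src
ENV NODE_ENV=production
CMD sh -c "yarn $EXEC_COMMAND"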
If you still want to cache packages across builds, you can use both the global cache and the local cache by setting enableGlobalCache to false (yes, the config name is a misnomer). That way, you can share the global cache across builds to avoid re-downloading packages, and resolve packages from the local cache at runtime.
- RUN yarn --immutable
+ RUN --mount=type=cache,target=/root/.yarn YARN_ENABLE_GLOBAL_CACHE="false" \
+ yarn --immutable
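Putting both suggestions together, the Dockerfile would look roughly like this (same file layout as above):
FROM node:20-alpine
WORKDIR /app
RUN corepack enable

COPY package.json ./
COPY yarn.lock ./

# The cache mount shares the global cache across builds (no re-downloading),
# while enableGlobalCache=false makes Yarn copy the packages into the
# project-local cache, which is committed to the image and therefore
# available when the container runs.
RUN --mount=type=cache,target=/root/.yarn YARN_ENABLE_GLOBAL_CACHE="false" \
    yarn --immutable

COPY tsconfig.json ./
COPY src ./src
ENV NODE_ENV=production
CMD sh -c "yarn $EXEC_COMMAND"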
Hi! 👋
It seems like this issue has been marked as probably resolved, or as missing important information blocking its progression. As a result, it'll be closed in a few days unless a maintainer explicitly vouches for it.