Error on restart: tail: can't open '/.devspace/screenlog.0': No such file or directory
What happened?
When running `devspace dev` for the first time, the logs are properly attached to the pod and I can see the application's output. When using the `onUpload` parameter to automatically restart the container, I see the following error after all subsequent restarts:
tail: can't open '/.devspace/screenlog.0': No such file or directory
successfully shut down api // <-- actual logs from the app
bye. // <-- actual logs from the app
############### Restart container ###############
tail: can't open '/.devspace/screenlog.0': No such file or directory
tail: no files
What did you expect to happen instead?
I expect the logs to be displayed in the pod log stream even after a restart.
My devspace.yaml:
version: v2beta1
name: api

# This is a list of `pipelines` that DevSpace can execute (you can define your own)
pipelines:
  # This is the pipeline for the main command: `devspace dev` (or `devspace run-pipeline dev`)
  dev:
    run: |-
      run_dependencies --all       # 1. Deploy any projects this project needs (see "dependencies")
      ensure_pull_secrets --all    # 2. Ensure pull secrets
      create_deployments --all     # 3. Deploy Helm charts and manifests specified as "deployments"
      start_dev app                # 4. Start dev mode "app" (see "dev" section)
  # You can run this pipeline via `devspace deploy` (or `devspace run-pipeline deploy`)
  deploy:
    run: |-
      run_dependencies --all                            # 1. Deploy any projects this project needs (see "dependencies")
      ensure_pull_secrets --all                         # 2. Ensure pull secrets
      build_images --all -t $(git describe --always)    # 3. Build, tag (git commit hash) and push all images (see "images")
      create_deployments --all                          # 4. Deploy Helm charts and manifests specified as "deployments"

# This is a list of `images` that DevSpace can build for this project
# We recommend skipping image building during development (devspace dev) as much as possible
images:
  app:
    image: my-image-name:tag
    dockerfile: ./Dockerfile

# This is a list of `dev` containers that are based on the containers created by your deployments
dev:
  app:
    # Search for the container that runs this image
    labelSelector:
      app: API
    restartHelper:
      inject: true
    # Replace the container image with this dev-optimized image (allows skipping image building during development)
    devImage: ghcr.io/loft-sh/devspace-containers/go:1.23-alpine
    patches:
      - op: remove
        path: spec.securityContext
    # Sync files between the local filesystem and the development container
    sync:
      - path: ./:/app
        excludePaths:
          - .git/
          - /config/
        onUpload:
          restartContainer: true
    # Run the following command inside the development container:
    command:
      - go
      - run
      - main.go
      - start
    # Open a terminal and use the following command to start it
    terminal:
      command: ./devspace_start.sh
    # Inject a lightweight SSH server into the container (so your IDE can connect to the remote dev env)
    ssh:
      enabled: true
    # Make the following commands from my local machine available inside the dev container
    proxyCommands:
      - command: devspace
      - command: kubectl
      - command: helm
      - gitCredentials: true
    # Forward the following ports to be able to access your application via localhost
    ports:
      - port: "2345"
      - port: "3000"
    # Open the following URLs once they return an HTTP status code other than 502 or 503
    open:
      - url: http://localhost:3000/healthz

# Define dependencies to other projects with a devspace.yaml
# dependencies:
#   api:
#     git: https://...    # Git-based dependencies
#     tag: v1.0.0
#   ui:
#     path: ./ui          # Path-based dependencies (for monorepos)
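For triage: the relevant pieces seem to be the injected restart helper combined with the sync-triggered restart. A minimal excerpt of the config above:

# minimal excerpt of the config above: the injected restart helper plus a
# sync-triggered container restart is what precedes the tail error
dev:
  app:
    restartHelper:
      inject: true
    sync:
      - path: ./:/app
        onUpload:
          restartContainer: true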
Local Environment:
- DevSpace Version: 6.3.14
- Operating System: macOS
- ARCH of the OS: ARM64
Kubernetes Cluster:
- Cloud Provider: Azure
- Kubernetes Version: v1.30.6
Loading the legacy restart helper instead of the default one seems to solve the issue.
REF: https://github.com/devspace-sh/devspace/blob/4b2d98d73e63626b66b0adf7e8834c7e31d3c19a/pkg/devspace/build/builder/restart/restart.go#L136
You can also configure the legacy restart helper. For reference, see here.
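If I read the config schema correctly, `restartHelper` also accepts a `path` field pointing at a custom restart helper script, so a local copy of the legacy script can be wired in. A minimal sketch, assuming the legacy script from the restart.go reference above has been saved locally as ./legacy-restart-helper.sh (a placeholder filename):

dev:
  app:
    restartHelper:
      inject: true
      # ./legacy-restart-helper.sh is a hypothetical local copy of the
      # legacy restart helper script linked in the REF above
      path: ./legacy-restart-helper.sh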
The DevSpace project currently lacks enough contributors to adequately respond to all issues. After 90 days of inactivity, the issue is closed. You can re-open this issue if you still want help.