unable to upgrade connection: Forbidden
`devspace dev` fails with `unable to upgrade connection: Forbidden`
start_dev: start sync: exec: unable to upgrade connection: Forbidden
fatal exit status
For some reason, it does not seem to be able to establish the connection: port forwarding starts, but the exec call that starts the sync is rejected with Forbidden. However, kubectl can connect to the cluster just fine.
Does a ServiceAccount need to be set up for this? If so, could you kindly provide an example of the permissions needed?
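For reference, one way to check whether the kubeconfig identity is allowed to open such connections is a SelfSubjectAccessReview (this is what `kubectl auth can-i` sends under the hood). The sketch below uses illustrative values (default namespace, exec on pods), and the file name is made up:

# check-exec.yaml (hypothetical file name): asks the API server whether the
# current identity may create pods/exec, the request behind the Forbidden error.
# Create it and inspect status.allowed in the response:
#   kubectl create -f check-exec.yaml -o yaml
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
  resourceAttributes:
    namespace: default
    verb: create
    resource: pods
    subresource: exec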
Debug output
#devspace dev --debug
19:34:34 info Using namespace 'default'
19:34:34 info Using kube context 'microk8s'
19:34:34 debug Use config:
version: v2beta1
name: app
pipelines:
  deploy:
    name: deploy
    run: |-
      run_dependencies --all # 1. Deploy any projects this project needs (see "dependencies")
      build_images --all -t $(git describe --always) # 2. Build, tag (git commit hash) and push all images (see "images")
      create_deployments --all # 3. Deploy Helm charts and manifests specified as "deployments"
  dev:
    name: dev
    run: |-
      run_dependencies --all # 1. Deploy any projects this project needs (see "dependencies")
      create_deployments --all # 2. Deploy Helm charts and manifests specified as "deployments"
      start_dev app # 3. Start dev mode "app" (see "dev" section)
images:
  app:
    name: app
    image: zitabel/app
    tags:
    - v0.0.1
    dockerfile: ./Dockerfile
deployments:
  app:
    name: app
    helm:
      chart:
        name: component-chart
        repo: https://charts.devspace.sh
      values:
        containers:
        - image: zitabel/app:v0.0.1
        service:
          ports:
          - port: 8080
dev:
  app:
    name: app
    imageSelector: zitabel/app:v0.0.1
    devImage: ghcr.io/loft-sh/devspace-containers/javascript:17-alpine
    sync:
    - path: ./
    terminal:
      command: ./devspace_start.sh
    ssh:
      enabled: true
    proxyCommands:
    - command: devspace
    - command: kubectl
    - command: helm
    - command: git
    ports:
    - port: "9229"
    - port: "8080"
    open:
    - url: http://localhost:8080
commands:
  migrate-db:
    name: migrate-db
    command: |-
      echo 'This is a cross-platform, shared command that can be used to codify any kind of dev task.'
      echo 'Anyone using this project can invoke it via "devspace run migrate-db"'
19:34:34 debug Run pipeline:
name: dev
run: |-
  run_dependencies --all # 1. Deploy any projects this project needs (see "dependencies")
  create_deployments --all # 2. Deploy Helm charts and manifests specified as "deployments"
  start_dev app # 3. Start dev mode "app" (see "dev" section)
19:34:34 run_dependencies --all
19:34:34 Marked project excluded: app
19:34:34 create_deployments --all
19:34:34 Deploying 1 deployments concurrently...
19:34:34 deploy:app Deploying chart /home/elias/.devspace/component-chart/component-chart-0.8.4.tgz (app) with helm...
19:34:34 deploy:app Deploying chart with values:
containers:
- image: zitabel/app:v0.0.1
service:
  ports:
  - port: 8080
19:34:34 deploy:app Execute '/home/elias/.devspace/bin/helm upgrade app --values /tmp/3714250176 --install --namespace default /home/elias/.devspace/component-chart/component-chart-0.8.4.tgz --kube-context microk8s'
19:34:35 deploy:app Deployed helm chart (Release revision: 1)
19:34:35 deploy:app Successfully deployed app with helm
19:34:35 Deploying 0 deployments concurrently
19:34:35 start_dev app
19:34:35 dev:app DevPod Config:
name: app
imageSelector: zitabel/app:v0.0.1
devImage: ghcr.io/loft-sh/devspace-containers/javascript:17-alpine
sync:
- path: ./
terminal:
  command: ./devspace_start.sh
ssh:
  enabled: true
proxyCommands:
- command: devspace
- command: kubectl
- command: helm
- command: git
ports:
- port: "9229"
- port: "8080"
open:
- url: http://localhost:8080
19:34:36 dev:app Replacing Deployment app...
19:34:36 dev:app Replaced pod spec:
metadata:
  annotations:
    devspace.sh/container: container-0
    devspace.sh/imageSelector: zitabel/app:v0.0.1
    helm.sh/chart: component-chart-0.8.4
  creationTimestamp: null
  labels:
    app.kubernetes.io/component: app
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: devspace-app
    devspace.sh/replaced: "true"
spec:
  containers:
  - command:
    - sleep
    - "1000000000"
    image: ghcr.io/loft-sh/devspace-containers/javascript:17-alpine
    imagePullPolicy: IfNotPresent
    name: container-0
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 5
19:34:36 dev:app Scaled down Deployment app
19:34:36 dev:app Successfully replaced Deployment app
19:34:36 dev:app Waiting for pod to become ready...
19:34:36 dev:app Start selecting a single container with selector image selector: zitabel/app:v0.0.1
19:34:38 dev:app Selected app-devspace-76b64468-qvd7q:container-0 (pod:container)
19:34:38 dev:app open Opening 'http://localhost:8080' as soon as application will be started
19:34:38 dev:app ports Start selecting a single pod with selector pod name: app-devspace-76b64468-qvd7q
19:34:38 dev:app sync Start selecting a single container with selector pod name: app-devspace-76b64468-qvd7q
19:34:39 dev:app sync Starting sync...
19:34:39 dev:app sync Inject devspacehelper...
19:34:39 dev:app ports Port forwarding started on: 9229 -> 9229, 8080 -> 8080
19:34:39 dev:app term Start selecting a single container with selector pod name: app-devspace-76b64468-qvd7q
19:34:39 dev:app ports Stopped port forwarding 9229
19:34:39 dev:app ports Stopped port forwarding 8080
19:34:39 dev:app Stopped dev app
19:34:39 start_dev: start sync: exec: unable to upgrade connection: Forbidden
19:34:39 fatal exit status 1
Local Environment:
- DevSpace Version: 6.0.1
- Operating System: linux
- ARCH of the OS: AMD64
Kubernetes Cluster:
- Cloud Provider: ubuntu microk8s installation
- Kubernetes Client Version: v1.24.3-2+63243a96d1c393
- Kubernetes Server Version: v1.24.3-2+63243a96d1c393
- Kustomize Version: v4.5.4
/kind bug
It does not work for examples/quickstart-kubectl either
#devspace dev --debug
20:11:39 info Using namespace 'default'
20:11:39 info Using kube context 'microk8s'
20:11:39 debug Use config:
version: v2beta1
name: quickstart-kubectl
deployments:
  quickstart:
    name: quickstart
    kubectl:
      manifests:
      - kube
dev:
  my-dev:
    name: my-dev
    imageSelector: zitabel/quickstart
    devImage: loftsh/javascript:latest
    sync:
    - path: ./
      excludePaths:
      - node_modules
    terminal:
      command: ./devspace_start.sh
    ssh: {}
    ports:
    - port: "3000"
    open:
    - url: http://localhost:3000
20:11:39 debug Run pipeline:
name: dev
run: |-
  run_dependencies --all
  ensure_pull_secrets --all
  build_images --all
  create_deployments --all
  start_dev --all
20:11:39 run_dependencies --all
20:11:39 Marked project excluded: quickstart-kubectl
20:11:39 ensure_pull_secrets --all
20:11:39 build_images --all
20:11:39 create_deployments --all
20:11:39 Deploying 1 deployments concurrently...
20:11:39 deploy:quickstart Applying manifests with kubectl...
20:11:40 deploy:quickstart deployment.apps/devspace configured
20:11:40 deploy:quickstart service/external unchanged
20:11:40 deploy:quickstart Successfully deployed quickstart with kubectl
20:11:40 Deploying 0 deployments concurrently
20:11:40 start_dev --all
20:11:40 dev:my-dev DevPod Config:
name: my-dev
imageSelector: zitabel/quickstart
devImage: loftsh/javascript:latest
sync:
- path: ./
  excludePaths:
  - node_modules
terminal:
  command: ./devspace_start.sh
ssh: {}
ports:
- port: "3000"
open:
- url: http://localhost:3000
20:11:40 dev:my-dev Try to find replaced deployment...
20:11:40 dev:my-dev Replaced pod spec:
metadata:
  annotations:
    devspace.sh/container: default
    devspace.sh/imageSelector: zitabel/quickstart
  creationTimestamp: null
  labels:
    app.kubernetes.io/component: default
    app.kubernetes.io/name: devspace-app
    devspace.sh/replaced: "true"
spec:
  containers:
  - command:
    - sleep
    - "1000000000"
    image: loftsh/javascript:latest
    imagePullPolicy: Always
    name: default
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
20:11:40 dev:my-dev No changes required in replaced deployment devspace-devspace
20:11:40 dev:my-dev Waiting for pod to become ready...
20:11:40 dev:my-dev Start selecting a single container with selector image selector: zitabel/quickstart
20:11:40 dev:my-dev Selected devspace-devspace-5c5dd5b78f-bcsn9:default (pod:container)
20:11:40 dev:my-dev open Opening 'http://localhost:3000' as soon as application will be started
20:11:40 dev:my-dev ports Start selecting a single pod with selector pod name: devspace-devspace-5c5dd5b78f-bcsn9
20:11:40 dev:my-dev sync Start selecting a single container with selector pod name: devspace-devspace-5c5dd5b78f-bcsn9
20:11:41 dev:my-dev sync Starting sync...
20:11:41 dev:my-dev sync Inject devspacehelper...
20:11:41 dev:my-dev ports Port forwarding started on: 3000 -> 3000
20:11:41 dev:my-dev term Start selecting a single container with selector pod name: devspace-devspace-5c5dd5b78f-bcsn9
20:11:41 dev:my-dev ports Stopped port forwarding 3000
20:11:41 dev:my-dev Stopped dev my-dev
20:11:41 start_dev: start sync: exec: unable to upgrade connection: Forbidden
20:11:41 fatal exit status 1
elias@jurij:~/code/app.steadiq.com/devspace/examples/quickstart-kubectl$ devspace purge
info Using namespace 'default'
info Using kube context 'microk8s'
dev:my-dev Stopping dev my-dev
dev:my-dev Scaling up Deployment devspace...
purge:quickstart Deleting deployment quickstart...
purge:quickstart Successfully deleted deployment quickstart
elias@jurij:~/code/app.steadiq.com/devspace/examples/quickstart-kubectl$ devspace dev --debug
20:11:53 info Using namespace 'default'
20:11:53 info Using kube context 'microk8s'
20:11:53 debug Use config:
version: v2beta1
name: quickstart-kubectl
deployments:
  quickstart:
    name: quickstart
    kubectl:
      manifests:
      - kube
dev:
  my-dev:
    name: my-dev
    imageSelector: zitabel/quickstart
    devImage: loftsh/javascript:latest
    sync:
    - path: ./
      excludePaths:
      - node_modules
    terminal:
      command: ./devspace_start.sh
    ssh: {}
    ports:
    - port: "3000"
    open:
    - url: http://localhost:3000
20:11:53 debug Run pipeline:
name: dev
run: |-
  run_dependencies --all
  ensure_pull_secrets --all
  build_images --all
  create_deployments --all
  start_dev --all
20:11:53 run_dependencies --all
20:11:53 Marked project excluded: quickstart-kubectl
20:11:53 ensure_pull_secrets --all
20:11:53 build_images --all
20:11:53 create_deployments --all
20:11:53 Deploying 1 deployments concurrently...
20:11:53 deploy:quickstart Applying manifests with kubectl...
20:11:53 deploy:quickstart deployment.apps/devspace created
20:11:53 deploy:quickstart service/external created
20:11:53 deploy:quickstart Successfully deployed quickstart with kubectl
20:11:53 Deploying 0 deployments concurrently
20:11:53 start_dev --all
20:11:53 dev:my-dev DevPod Config:
name: my-dev
imageSelector: zitabel/quickstart
devImage: loftsh/javascript:latest
sync:
- path: ./
  excludePaths:
  - node_modules
terminal:
  command: ./devspace_start.sh
ssh: {}
ports:
- port: "3000"
open:
- url: http://localhost:3000
20:11:53 dev:my-dev Replacing Deployment devspace...
20:11:53 dev:my-dev Replaced pod spec:
metadata:
  annotations:
    devspace.sh/container: default
    devspace.sh/imageSelector: zitabel/quickstart
  creationTimestamp: null
  labels:
    app.kubernetes.io/component: default
    app.kubernetes.io/name: devspace-app
    devspace.sh/replaced: "true"
spec:
  containers:
  - command:
    - sleep
    - "1000000000"
    image: loftsh/javascript:latest
    imagePullPolicy: Always
    name: default
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
20:11:54 dev:my-dev Scaled down Deployment devspace
20:11:54 dev:my-dev Successfully replaced Deployment devspace
20:11:54 dev:my-dev Waiting for pod to become ready...
20:11:54 dev:my-dev Start selecting a single container with selector image selector: zitabel/quickstart
20:12:01 dev:my-dev Selected devspace-devspace-5c5dd5b78f-h99lt:default (pod:container)
20:12:01 dev:my-dev open Opening 'http://localhost:3000' as soon as application will be started
20:12:01 dev:my-dev ports Start selecting a single pod with selector pod name: devspace-devspace-5c5dd5b78f-h99lt
20:12:01 dev:my-dev sync Start selecting a single container with selector pod name: devspace-devspace-5c5dd5b78f-h99lt
20:12:01 dev:my-dev sync Starting sync...
20:12:01 dev:my-dev sync Inject devspacehelper...
20:12:01 dev:my-dev ports Port forwarding started on: 3000 -> 3000
20:12:02 dev:my-dev term Start selecting a single container with selector pod name: devspace-devspace-5c5dd5b78f-h99lt
20:12:02 dev:my-dev ports Stopped port forwarding 3000
20:12:02 dev:my-dev Stopped dev my-dev
20:12:02 start_dev: start sync: exec: unable to upgrade connection: Forbidden
20:12:02 fatal exit status 1
examples/quickstart-kubectl/.devspace/logs/dev.dev:my-dev.log
{"level":"info","msg":"dev:my-dev Waiting for pod to become ready...","time":"2022-08-28T20:03:01+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:03:12+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:03:22+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:03:32+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:03:43+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:03:54+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:04:04+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:04:14+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:04:25+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:04:35+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:04:46+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:04:56+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:05:07+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:05:17+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:05:28+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:05:38+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-6cb8bfcff5-gdhtr has status: ContainerCreating","time":"2022-08-28T20:05:48+02:00"}
{"level":"info","msg":"dev:my-dev Waiting for pod to become ready...","time":"2022-08-28T20:06:40+02:00"}
{"level":"warning","msg":"dev:my-dev DevSpace is waiting, because Pod devspace-devspace-5c5dd5b78f-bcsn9 has status: ContainerCreating","time":"2022-08-28T20:06:51+02:00"}
{"level":"info","msg":"dev:my-dev Selected devspace-devspace-5c5dd5b78f-bcsn9:default (pod:container)","time":"2022-08-28T20:07:00+02:00"}
{"level":"info","msg":"dev:my-dev open Opening 'http://localhost:3000' as soon as application will be started","time":"2022-08-28T20:07:00+02:00"}
{"level":"info","msg":"dev:my-dev sync Inject devspacehelper...","time":"2022-08-28T20:07:01+02:00"}
{"level":"info","msg":"dev:my-dev ports Port forwarding started on: 3000 -\u003e 3000","time":"2022-08-28T20:07:01+02:00"}
{"level":"info","msg":"dev:my-dev Waiting for pod to become ready...","time":"2022-08-28T20:11:40+02:00"}
{"level":"info","msg":"dev:my-dev Selected devspace-devspace-5c5dd5b78f-bcsn9:default (pod:container)","time":"2022-08-28T20:11:40+02:00"}
{"level":"info","msg":"dev:my-dev open Opening 'http://localhost:3000' as soon as application will be started","time":"2022-08-28T20:11:40+02:00"}
{"level":"info","msg":"dev:my-dev sync Inject devspacehelper...","time":"2022-08-28T20:11:41+02:00"}
{"level":"info","msg":"dev:my-dev ports Port forwarding started on: 3000 -\u003e 3000","time":"2022-08-28T20:11:41+02:00"}
{"level":"info","msg":"dev:my-dev Waiting for pod to become ready...","time":"2022-08-28T20:11:54+02:00"}
{"level":"info","msg":"dev:my-dev Selected devspace-devspace-5c5dd5b78f-h99lt:default (pod:container)","time":"2022-08-28T20:12:01+02:00"}
{"level":"info","msg":"dev:my-dev open Opening 'http://localhost:3000' as soon as application will be started","time":"2022-08-28T20:12:01+02:00"}
{"level":"info","msg":"dev:my-dev sync Inject devspacehelper...","time":"2022-08-28T20:12:01+02:00"}
{"level":"info","msg":"dev:my-dev ports Port forwarding started on: 3000 -\u003e 3000","time":"2022-08-28T20:12:01+02:00"}
examples/quickstart-kubectl/.devspace/logs/errors.log
{"level":"error","msg":"Runtime error occurred: error closing listener: close tcp4 127.0.0.1:3000: use of closed network connection","time":"2022-08-28T20:07:01+02:00"}
{"level":"error","msg":"Runtime error occurred: error closing listener: close tcp6 [::1]:3000: use of closed network connection","time":"2022-08-28T20:07:01+02:00"}
{"level":"error","msg":"Runtime error occurred: error closing listener: close tcp4 127.0.0.1:3000: use of closed network connection","time":"2022-08-28T20:11:41+02:00"}
{"level":"error","msg":"Runtime error occurred: error closing listener: close tcp6 [::1]:3000: use of closed network connection","time":"2022-08-28T20:11:41+02:00"}
{"level":"error","msg":"Runtime error occurred: error closing listener: close tcp4 127.0.0.1:3000: use of closed network connection","time":"2022-08-28T20:12:02+02:00"}
{"level":"error","msg":"Runtime error occurred: error closing listener: close tcp6 [::1]:3000: use of closed network connection","time":"2022-08-28T20:12:02+02:00"}
Hi @trickkiste, I think you'll need to grant RBAC permissions for pods, pods/exec, and pods/portforward.
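For example, a minimal Role and RoleBinding along these lines should cover the dev-mode connections. This is a sketch: the names and the User subject are placeholders, and deploying (helm, kubectl apply) will still need broader permissions:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: devspace-dev   # placeholder name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/exec", "pods/portforward"]
  verbs: ["get", "create"]   # the upgrade requests need create; get covers websocket clients
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: devspace-dev
  namespace: default
subjects:
- kind: User
  name: dev-user   # placeholder: whatever identity your kubeconfig uses
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: devspace-dev
  apiGroup: rbac.authorization.k8s.io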
Hello, @trickkiste! Thanks a lot for reporting this issue!
Has @pratikjagrut's suggestion resolved the problem for you? We'd really appreciate an update on the present issue.
Thanks!
@alexandradragodan well, initially I did not even have RBAC enabled when I ran into this issue. I have since enabled it and put some RBAC permissions in place, and it works now. However, I cannot really narrow down the actual cause, as other parts of my initial setup might have interfered. This was my first time getting a k8s cluster up and running, and the user/permissions part in particular is documented rather poorly in Kubernetes.
Thank you all for your help and concern. I hope next time I can provide a more detailed bug report.
Glad you managed to get it working, and congratulations on running your first k8s cluster!
I will close this issue then, as the error is no longer occurring. For any further questions, we're happy to help.