error: failed to solve: failed to fetch oauth token: authorization server did not include a token in the response
I got the following error when building a simple Dockerfile with buildx. Has anyone encountered this error and resolved it?
~/dind-buildx-test/medium # docker buildx build -t registry.example.com/yp/buildx/medium:v1 -f Dockerfile.test .
WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 0.3s (4/4) FINISHED
=> [internal] load build definition from Dockerfile.test 0.0s
=> => transferring dockerfile: 125B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for registry.example.com/yp/centos/centos6u3:pure_base 0.2s
=> [auth] yp/centos/centos6u3:pull token for registry.example.com 0.0s
------
> [internal] load metadata for registry.example.com/yp/centos/centos6u3:pure_base:
------
Dockerfile.test:1
--------------------
1 | >>> FROM registry.example.com/yp/centos/centos6u3:pure_base
2 | RUN echo 'hello buildx'
3 |
--------------------
error: failed to solve: failed to fetch oauth token: authorization server did not include a token in the response
~/dind-buildx-test/medium #
registry.example.com/yp is a public project and I have logged into registry.example.com.
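For reference, one way to rule out stale credentials is to check what the client actually has stored (a sketch assuming the default config path and no custom DOCKER_CONFIG):
cat ~/.docker/config.json           # look for an "auths" entry for registry.example.com
docker login registry.example.com   # re-authenticate if the entry looks stale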
Dockerfile.test:
~/dind-buildx-test/medium # cat Dockerfile.test
FROM registry.example.com/yp/centos/centos6u3:pure_base
RUN echo 'hello buildx'
~/dind-buildx-test/medium #
docker info output:
~/dind-buildx-test/medium # docker info
Client:
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc., v0.8.2)
Server:
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 5
Server Version: 20.10.15
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 212e8b6fa2f44b9c21b2798135fc6fb7c53efc16
runc version: v1.1.1-0-g52de29d7
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 4.14.0_1-0-0-43
Operating System: Alpine Linux v3.15 (containerized)
OSType: linux
Architecture: x86_64
CPUs: 32
Total Memory: 62.41GiB
Name: dind-buildx-jgnf2
ID: DQ6O:IZK4:U3YA:FXGM:ZAJH:ZDDB:VQZG:UVME:EODG:QBOS:QIZV:V5UR
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: API is accessible on http://0.0.0.0:2375 without encryption.
Access to the remote API is equivalent to root access on the host. Refer
to the 'Docker daemon attack surface' section in the documentation for
more information: https://docs.docker.com/go/attack-surface/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
I also had this issue going from Docker CLI 20.10.23 to 23.0.0. I reverted the CLI but kept the engine at the latest 23.0.0, and that resolved the issue. Not sure if it's a CLI-only issue or a buildx issue.
I also noticed that with the latest 23.0.0 CLI this issue only occurred when I was logged in to the registry with docker; logging out meant the docker build succeeded.
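For anyone comparing setups, a quick way to confirm which client and engine versions are in play (standard docker CLI, nothing assumed beyond a running daemon):
docker version --format 'client: {{.Client.Version}}  server: {{.Server.Version}}'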
For me, a simple docker logout <registry url> worked.
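In the original poster's setup that would be (just substituting the registry from the question):
docker logout registry.example.com
docker buildx build -t registry.example.com/yp/buildx/medium:v1 -f Dockerfile.test .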
wtf?
For me as well
useless
sudo docker logout worked for me
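Worth noting: with sudo, docker reads root's client config rather than your user's, so the two credential stores can hold different (and differently stale) logins. Assuming the default paths:
sudo cat /root/.docker/config.json   # credentials used by sudo docker
cat ~/.docker/config.json            # credentials used by plain docker
sudo docker logout registry.example.com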