az login device login
Describe the bug
- Run the docker image mcr.microsoft.com/azure-cli on a VirtualBox VM.
- Use docker exec -it <container> sh to enter the container, then run az login.
- This initiates a device login flow.
- The network connection of the VM disappears; stopping the container brings the connection back.
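For reference, the steps above can be sketched as the following commands (the container name "azcli" and the keep-alive command are just examples; the image name is from the report):

```shell
# Start the Azure CLI container in the background (keep it alive with a no-op)
docker run -d --name azcli mcr.microsoft.com/azure-cli tail -f /dev/null

# Enter the container
docker exec -it azcli sh

# inside the container:
az login        # starts the https://microsoft.com/devicelogin device flow

# back on the VM: connectivity drops shortly after login, until:
docker stop azcli
```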
A ping to google.com running during the steps above shows the moment it happens:
PING google.com (x.y.z.q) 56(84) bytes of data.
64 bytes from ws-in-f139.1e100.net (x.y.z.q): icmp_seq=1 ttl=57 time=15.6 ms
64 bytes from ws-in-f139.1e100.net (x.y.z.q): icmp_seq=2 ttl=57 time=18.8 ms
64 bytes from ws-in-f139.1e100.net (x.y.z.q): icmp_seq=3 ttl=57 time=14.1 ms
64 bytes from ws-in-f139.1e100.net (x.y.z.q): icmp_seq=4 ttl=57 time=16.8 ms
64 bytes from ws-in-f139.1e100.net (x.y.z.q): icmp_seq=5 ttl=57 time=16.2 ms
64 bytes from ws-in-f139.1e100.net (x.y.z.q): icmp_seq=6 ttl=57 time=14.6 ms
64 bytes from ws-in-f139.1e100.net (x.y.z.q): icmp_seq=7 ttl=57 time=14.2 ms
64 bytes from ws-in-f139.1e100.net (x.y.z.q): icmp_seq=8 ttl=57 time=17.0 ms
64 bytes from ws-in-f139.1e100.net (x.y.z.q): icmp_seq=9 ttl=57 time=16.0 ms
64 bytes from ws-in-f139.1e100.net (x.y.z.q): icmp_seq=10 ttl=57 time=20.2 ms
64 bytes from ws-in-f139.1e100.net (x.y.z.q): icmp_seq=11 ttl=57 time=14.0 ms
From oslo.local (a.b.c.d) icmp_seq=12 Destination Host Unreachable
From oslo.local (a.b.c.d) icmp_seq=13 Destination Host Unreachable
Related command
az login
Errors
The network connection is lost, so the dockerized Azure CLI no longer works.
Issue script & Debug output
.
Expected behavior
The network connection should stay up.
Environment Summary
.
Additional context
.
Thank you for opening this issue, we will look into it.
az login is taking a lot of time to open the browser. I am not sure if this is related to this issue?
No, it is not related. The issue is that az login creates a device login request (https://microsoft.com/devicelogin), and after a successful login this affects the network routing on my local machine.
I do not understand what is changed, or why. As a side effect it kills the internet connection in my docker container: all of a sudden the traffic is routed to a dead end.
Additional info: it turns out that starting the docker container creates an additional ethernet interface (nr 27 below). The output below is the state just after starting the container; at that moment everything still works fine:
oslo@oslo:~$ sudo netplan status --all
Online state: online
DNS Addresses: 127.0.0.53 (stub)
DNS Search: .
● 1: lo ethernet UNKNOWN/UP (unmanaged)
MAC Address: 00:00:00:00:00:00
Addresses: 127.0.0.1/8
::1/128
Routes: ::1 metric 256
● 2: enp0s3 ethernet UP (networkd: enp0s3)
MAC Address: 02:19:cb:da:0d:b3 (Intel Corporation)
Addresses: 10.0.2.15/24 (dhcp)
fe80::19:cbff:feda:db3/64 (link)
DNS Addresses: 10.0.2.3
Routes: default via 10.0.2.2 (boot)
default via 10.0.2.2 from 10.0.2.15 metric 100 (dhcp)
10.0.2.0/24 from 10.0.2.15 (link)
10.0.2.2 (boot, link)
10.0.2.2 from 10.0.2.15 metric 100 (dhcp, link)
10.0.2.3 from 10.0.2.15 metric 100 (dhcp, link)
fe80::/64 metric 256
● 3: docker0 bridge UP (unmanaged)
MAC Address: 02:42:36:b4:4f:a2
Addresses: 172.17.0.1/16
fe80::42:36ff:feb4:4fa2/64 (link)
Routes: 172.17.0.0/16 from 172.17.0.1 (link)
fe80::/64 metric 256
● 27: veth1b9b0b2 ethernet UP (unmanaged)
MAC Address: 2e:0a:a4:74:2c:66
Addresses: fe80::2c0a:a4ff:fe74:2c66/64 (link)
Routes: fe80::/64 metric 256
However, after a short while this last interface changes into:
● 27: veth1b9b0b2 ethernet UP (unmanaged)
MAC Address: 2e:0a:a4:74:2c:66
Addresses: 169.254.48.194/16 (link)
fe80::2c0a:a4ff:fe74:2c66/64 (link)
Routes: 0.0.0.0 (boot, link)
default (boot, link)
169.254.0.0/16 from 169.254.48.194 (link)
fe80::/64 metric 256
causing all network traffic to break. I do not understand why this happens or what can be done to resolve it. It used to work before December 2023.
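One hedged mitigation sketch, assuming a link-local (169.254.0.0/16) autoconfiguration service such as systemd-networkd is claiming the new veth device: tell networkd to leave Docker's veth* interfaces unmanaged. The file path and name here are examples, not from the report, and this assumes systemd-networkd is the component assigning the address:

```shell
# Hypothetical mitigation: mark Docker's veth* devices as unmanaged so no
# link-local address or default route is put on them (assumes systemd-networkd)
sudo tee /etc/systemd/network/10-veth-unmanaged.network <<'EOF'
[Match]
Name=veth*

[Link]
Unmanaged=yes
EOF
sudo systemctl restart systemd-networkd
```

If NetworkManager rather than networkd manages the host, an equivalent "unmanaged-devices" setting would be needed there instead.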
I found out that in this case the docker container creates a bridged network in my VirtualBox Ubuntu 22.04 VM. Starting the azure-cli container creates a new ethernet interface which takes control of the internet connection. This also affects the host environment: all traffic gets redirected through this new interface, and the interface is not connected to the original host network.
As a (temporary) solution I added --network host to the docker run parameters; in that case no new network is created and the problem does not occur.
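The workaround described above looks like this (interactive run shown as an example):

```shell
# Share the host network namespace so Docker creates no bridge/veth pair
# for this container; the VM's routing table is left untouched
docker run -it --network host mcr.microsoft.com/azure-cli
```

With --network host the container has no separate network stack, so nothing can add a link-local address or route on the host.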
So there seems to be a specific networking condition with the azure-cli container that does not occur with others. I run containers on the same VirtualBox VM all the time and only this one causes trouble.