`configure_docker_dns: true` breaks everything on blacksmith.sh runners
Hi,
My workflow runs some Docker commands that fail after the WARP step executes, so I set `configure_docker_dns` to `true`, and that breaks the action entirely:
/home/runner/_work/_actions/Boostport/setup-cloudflare-warp/v1.14.0/dist/index.js:30726
if (err.code === "ENOENT") {
^
TypeError: Cannot read properties of null (reading 'code')
at /home/runner/_work/_actions/Boostport/setup-cloudflare-warp/v1.14.0/dist/index.js:30726:13
at FSReqCallback.oncomplete (node:fs:199:5)
Can you try with v1.8.0? That was the version before the `configure_docker_dns` option was introduced.
I just tried it in a test repo and it worked fine. Are you using GitHub's hosted runners or your own runners?
Here's my output:
Run Boostport/setup-cloudflare-warp@v1
with:
organization: ***
auth_client_id: ***
auth_client_secret: ***
configure_docker_dns: true
/bin/bash -c echo DNSStubListenerExtra=172.17.0.1 | sudo tee -a /etc/systemd/resolved.conf
/bin/bash -c echo '{}' | sudo tee /etc/docker/daemon.json
DNSStubListenerExtra=172.17.0.1
{}
/bin/bash -c cat /etc/docker/daemon.json | jq '.dns=["172.17.0.1"]' | sudo tee /etc/docker/daemon.json
{
"dns": [
"172.17.0.1"
]
}
/usr/bin/sudo systemctl restart systemd-resolved
/usr/bin/sudo systemctl restart docker
/usr/bin/sudo mkdir -p /var/lib/cloudflare-warp/
/usr/bin/sudo mv /tmp/mdm.xml /var/lib/cloudflare-warp/
/usr/bin/which warp-cli
warp-cli not found, proceeding with installation
/bin/bash -c cat /home/runner/work/_temp/5936e559-4d00-49e7-80d6-57f5ae484a14 | sudo gpg --yes --dearmor --output /usr/share/keyrings/cloudflare-warp-archive-keyring.gpg
/bin/bash -c echo "deb [signed-by=/usr/share/keyrings/cloudflare-warp-archive-keyring.gpg] https://pkg.cloudflareclient.com/ $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/cloudflare-client.list
deb [signed-by=/usr/share/keyrings/cloudflare-warp-archive-keyring.gpg] https://pkg.cloudflareclient.com/ noble main
/usr/bin/sudo apt update
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
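For anyone following along, the `daemon.json` step in that log is just a `jq` merge, which can be reproduced safely against a scratch file. The sample input below (with a `registry-mirrors` key) is made up to show that pre-existing keys survive the merge:

```shell
# Reproduce the action's daemon.json edit on a temp copy, not the real file.
tmp=$(mktemp)
printf '{"registry-mirrors":["http://192.168.127.1:5000/"]}\n' > "$tmp"
# Same filter the action pipes through tee; existing keys are preserved,
# only .dns is added/overwritten.
jq '.dns=["172.17.0.1"]' "$tmp"
rm -f "$tmp"
```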
This is my workflow:
on:
push:
branches:
- main
name: test
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Connect to Cloudflare WARP
uses: Boostport/setup-cloudflare-warp@v1
with:
organization: ${{ secrets.CLOUDFLARE_ACCESS_ORGANIZATION }}
auth_client_id: ${{ secrets.CLOUDFLARE_ACCESS_CLIENT_ID }}
auth_client_secret: ${{ secrets.CLOUDFLARE_ACCESS_CLIENT_SECRET }}
configure_docker_dns: true
- name: Checkout
uses: actions/checkout@v4
- name: Import Secrets
uses: enflo/curl-action@master
with:
curl: -k SOME_PRIVATE_ADDRESS
I just released v1.15.0, which migrates all `fs` usage to the promises API. Perhaps that will fix your issue.
Hi, v1.15.0 no longer crashes, but DNS resolution inside a Docker container still doesn't work.
My runner is a blacksmith.sh machine.
Ah, that must be the problem. I'm guessing blacksmith.sh doesn't use the same OS images as the GitHub Actions runners. The bits that set up DNS resolution from within Docker are here: https://github.com/Boostport/setup-cloudflare-warp/blob/main/lib/setup-cloudflare-warp.js#L185
I'd suggest forking the repo, adding some debug statements, and running it on your blacksmith.sh runner to see what's failing. Once we know that, we can potentially add a few more configuration options to allow customization of the file paths.
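As a starting point, here is a minimal sketch of such a debug check — the helper name is mine, and it's demonstrated on temp copies; on the runner you would pass the real `/etc/systemd/resolved.conf` and `/etc/docker/daemon.json` paths:

```shell
# Hypothetical helper: verify that both files the action rewrites actually
# ended up with the expected values.
check_warp_dns() {
  # $1 = resolved.conf path, $2 = daemon.json path
  grep -q '^DNSStubListenerExtra=172.17.0.1' "$1" \
    && echo "resolved.conf: stub listener ok" \
    || echo "resolved.conf: stub listener MISSING"
  grep -q '"172.17.0.1"' "$2" \
    && echo "daemon.json: dns ok" \
    || echo "daemon.json: dns MISSING"
}

# Demo against temp files standing in for the real ones.
rc=$(mktemp); dj=$(mktemp)
echo 'DNSStubListenerExtra=172.17.0.1' > "$rc"
echo '{"dns":["172.17.0.1"]}' > "$dj"
check_warp_dns "$rc" "$dj"
rm -f "$rc" "$dj"
```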
I've inspected /etc/resolv.conf and the content is pointing to 127.0.0.53.
Then I ran resolvectl status and the output is:
Global
Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Fallback DNS Servers: 8.8.8.8 8.8.4.4 1.1.1.1 1.0.0.1 9.9.9.9 149.112.112.112
Link 2 (dummy0)
Current Scopes: none
Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 3 (eth0)
Current Scopes: DNS
Protocols: +DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 8.8.8.8
DNS Servers: 8.8.8.8 1.1.1.1
DNS Domain: ~.
Link 4 (sit0)
Current Scopes: none
Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 5 (ip6tnl0)
Current Scopes: none
Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 6 (ip6gre0)
Current Scopes: none
Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 7 (tailscale0)
Current Scopes: none
Protocols: +DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 8 (docker0)
Current Scopes: none
Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 9 (bsdummy6)
Current Scopes: none
Protocols: -DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
It looks pretty OK to me, and I really can't understand why Docker doesn't work. BUT: if I stop the WARP client, Docker starts resolving names again.
Can you post the contents of /etc/docker/daemon.json and /etc/systemd/resolved.conf?
Can you also add another step before this action and check to see if warp-cli is already installed on the machine?
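Something like this would do — a hypothetical diagnostic step that tolerates missing files so it runs on any image:

```shell
# Dump the two files this action edits, and report whether warp-cli
# is already present on the machine.
for f in /etc/docker/daemon.json /etc/systemd/resolved.conf; do
  echo "== $f =="
  cat "$f" 2>/dev/null || echo "(missing)"
done
command -v warp-cli >/dev/null 2>&1 \
  && echo "warp-cli: installed" \
  || echo "warp-cli: not installed"
```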
warp-cli is not installed, so the action installs it on every run:
warp-cli not found, proceeding with installation
/etc/docker/daemon.json
{
"registry-mirrors": [
"http://192.168.127.1:5000/"
],
"insecure-registries": [
"192.168.127.1:5000"
],
"dns": [
"172.17.0.1"
]
}
/etc/systemd/resolved.conf
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it under the
# terms of the GNU Lesser General Public License as published by the Free
# Software Foundation; either version 2.1 of the License, or (at your option)
# any later version.
#
# Entries in this file show the compile time defaults. Local configuration
# should be created by either modifying this file (or a copy of it placed in
# /etc/ if the original file is shipped in /usr/), or by creating "drop-ins" in
# the /etc/systemd/resolved.conf.d/ directory. The latter is generally
# recommended. Defaults can be restored by simply deleting the main
# configuration file and all drop-ins located in /etc/.
#
# Use 'systemd-analyze cat-config systemd/resolved.conf' to display the full config.
#
# See resolved.conf(5) for details.
[Resolve]
# Some examples of DNS servers which may be used for DNS= and FallbackDNS=:
# Cloudflare: 1.1.1.1#cloudflare-dns.com 1.0.0.1#cloudflare-dns.com 2606:4700:4700::1111#cloudflare-dns.com 2606:4700:4700::1001#cloudflare-dns.com
# Google: 8.8.8.8#dns.google 8.8.4.4#dns.google 2001:4860:4860::8888#dns.google 2001:4860:4860::8844#dns.google
# Quad9: 9.9.9.9#dns.quad9.net 149.112.112.112#dns.quad9.net 2620:fe::fe#dns.quad9.net 2620:fe::9#dns.quad9.net
#DNS=
#FallbackDNS=
#Domains=
#DNSSEC=no
#DNSOverTLS=no
#MulticastDNS=no
#LLMNR=no
#Cache=no-negative
#CacheFromLocalhost=no
#DNSStubListener=yes
#DNSStubListenerExtra=
#ReadEtcHosts=yes
#ResolveUnicastSingleLabel=no
#StaleRetentionSec=0
DNSStubListenerExtra=172.17.0.1
These are the results on GitHub Actions:
{
"dns": [
"172.17.0.1"
]
}
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it under the
# terms of the GNU Lesser General Public License as published by the Free
# Software Foundation; either version 2.1 of the License, or (at your option)
# any later version.
#
# Entries in this file show the compile time defaults. Local configuration
# should be created by either modifying this file (or a copy of it placed in
# /etc/ if the original file is shipped in /usr/), or by creating "drop-ins" in
# the /etc/systemd/resolved.conf.d/ directory. The latter is generally
# recommended. Defaults can be restored by simply deleting the main
# configuration file and all drop-ins located in /etc/.
#
# Use 'systemd-analyze cat-config systemd/resolved.conf' to display the full config.
#
# See resolved.conf(5) for details.
[Resolve]
# Some examples of DNS servers which may be used for DNS= and FallbackDNS=:
# Cloudflare: 1.1.1.1#cloudflare-dns.com 1.0.0.1#cloudflare-dns.com 2606:4700:4700::1111#cloudflare-dns.com 2606:4700:4700::1001#cloudflare-dns.com
# Google: 8.8.8.8#dns.google 8.8.4.4#dns.google 2001:4860:4860::8888#dns.google 2001:4860:4860::8844#dns.google
# Quad9: 9.9.9.9#dns.quad9.net 149.112.112.112#dns.quad9.net 2620:fe::fe#dns.quad9.net 2620:fe::9#dns.quad9.net
#DNS=
#FallbackDNS=
#Domains=
#DNSSEC=no
#DNSOverTLS=no
#MulticastDNS=no
#LLMNR=no
#Cache=no-negative
#CacheFromLocalhost=no
#DNSStubListener=yes
#DNSStubListenerExtra=
#ReadEtcHosts=yes
#ResolveUnicastSingleLabel=no
#StaleRetentionSec=0
DNSStubListenerExtra=172.17.0.1
Can you check if the Docker daemon hands out IP addresses in the 172.17.x.x range?
Looks like yes. `docker network inspect bridge`:
[
{
"Name": "bridge",
"Id": "4cadc9711ed114fd5063f09efb0b75d090554248cc0624f83a03ded3421b46b8",
"Created": "2025-08-27T09:48:40.89806029Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
blacksmith.sh's networking setup probably differs from GitHub Actions' in some way. As a workaround for now, maybe add a separate step after the setup-cloudflare-warp action that restarts warp-cli.
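A sketch of that workaround as a workflow step — the step name and the `sleep` are my assumptions; `disconnect`/`connect` are warp-cli's standard subcommands:

```yaml
- name: Restart WARP (workaround)
  run: |
    # Bounce the WARP client so it re-reads the resolver state.
    warp-cli --accept-tos disconnect || true
    warp-cli --accept-tos connect
    sleep 5 # give the tunnel a moment to come up
```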
If you find a solution, I am happy to accept a PR for it.