Overseerr does not handle IPv6-only environments
Description
I am attempting to run Overseerr on a server that has no IPv4 address, so all outbound connections from this machine must use IPv6. Most services fall back to AAAA records when connections to A-record addresses fail, but the current release of Overseerr does not fall back to IPv6 at all. This appears to be caused by older versions of Node.js preferring IPv4 addresses when resolving hostnames.
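For reference, the ordering difference is easy to demonstrate with Node's own resolver API; a minimal sketch (run inside the container, with plex.tv just as an example hostname):

```ts
// On Node < 17, dns.lookup() defaults to verbatim: false, which sorts any
// IPv4 result ahead of IPv6 ones; on an IPv6-only host the app then tries
// an unreachable IPv4 address first.
import { lookup } from 'dns';

// Default ordering on Node 16: IPv4 addresses come first.
lookup('plex.tv', { all: true }, (err, addresses) => {
  console.log('default order:', addresses);
});

// verbatim: true (the default from Node 17 onward) keeps the resolver's
// own order, so AAAA records are no longer pushed behind A records.
lookup('plex.tv', { all: true, verbatim: true }, (err, addresses) => {
  console.log('verbatim order:', addresses);
});
```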
Version
1.30.1
Steps to Reproduce
The Docker container starts normally but fails every time it tries to make an outgoing connection, which breaks almost all of the app's functionality.
2022-12-04T20:24:13.102Z [info]: Starting Overseerr version 1.30.1
2022-12-04T20:24:13.958Z [info][Notifications]: Registered notification agents
2022-12-04T20:24:13.987Z [info][Jobs]: Scheduled jobs loaded
2022-12-04T20:24:14.100Z [info][Server]: Server ready on port 5055
2022-12-04T20:24:25.485Z [warn][GitHub API]: Failed to retrieve GitHub releases. This may be an issue on GitHub's end. Overseerr can't check if it's on the latest version. {"errorMessage":"connect ENETUNREACH 140.82.121.6:443"}
2022-12-04T20:24:25.521Z [debug][API]: Something went wrong retrieving backdrops {"errorMessage":"[TMDB] Failed to fetch all trending: connect ENETUNREACH 13.225.78.31:443"}
2022-12-04T20:24:26.537Z [warn][GitHub API]: Failed to retrieve GitHub releases. This may be an issue on GitHub's end. Overseerr can't check if it's on the latest version. {"errorMessage":"connect ENETUNREACH 140.82.121.6:443"}
2022-12-04T20:24:30.428Z [warn][GitHub API]: Failed to retrieve GitHub releases. This may be an issue on GitHub's end. Overseerr can't check if it's on the latest version. {"errorMessage":"connect ENETUNREACH 140.82.121.6:443"}
2022-12-04T20:24:33.460Z [error][Plex.tv API]: Something went wrong while getting the account from plex.tv: connect ENETUNREACH 52.31.244.14:443
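The same failure can be reproduced outside Overseerr with a bare Node request in the container; a minimal sketch (assuming Node 16, the version shipped in the image):

```ts
// Any outbound HTTPS request fails like the log lines above, because the
// default lookup resolves to an IPv4 address this host cannot reach.
import { get } from 'https';

get('https://plex.tv', (res) => {
  console.log('status:', res.statusCode);
}).on('error', (err) => {
  // Expected on an IPv6-only host under Node < 17:
  // connect ENETUNREACH <some IPv4 address>:443
  console.error(err.message);
});
```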
Entering the container, we can get the correct AAAA records from dig and even connect to these sites with curl. This shows the container's networking is set up correctly; something in the Node application is not handling IPv6 properly.
justin@kalak:/opt$ docker run -it --entrypoint /bin/sh sctx/overseerr
/app # apk add --update bind-tools -q
/app # dig +short plex.tv AAAA
2a00:1098:2b::1:3430:c26f
2a00:1098:2c::5:3430:c26f
2a00:1098:2c::5:341f:f40e
2a01:4f8:c2c:123f:64:5:3413:e9a8
2a00:1098:2c::5:3648:318c
2a01:4f8:c2c:123f:64:5:341f:f40e
2a00:1098:2c::5:3413:e9a8
2a00:1098:2b::1:3648:318c
2a00:1098:2b::1:341f:f40e
2a01:4f8:c2c:123f:64:5:3430:c26f
2a00:1098:2b::1:3413:e9a8
2a01:4f8:c2c:123f:64:5:3648:318c
/app # curl plex.tv
Moved Permanently/app #
Attempting to build the image from the current Dockerfile also fails, because yarn (running under the same Node version) tries to fetch packages from IPv4 addresses:
justin@kalak:/opt/build/overseerr$ docker build . -t overseerr:wpdev0
Sending build context to Docker daemon 6.384MB
Step 1/22 : FROM node:16.17-alpine AS BUILD_IMAGE
16.17-alpine: Pulling from library/node
213ec9aee27d: Already exists
bb60732a8e9f: Already exists
9f61bc6ef19c: Already exists
8de0f21617f6: Already exists
Digest: sha256:4d68856f48be7c73cd83ba8af3b6bae98f4679e14d1ff49e164625ae8831533a
Status: Downloaded newer image for node:16.17-alpine
---> f7ef5856dc1f
Step 2/22 : WORKDIR /app
---> Running in 712b904383d0
Removing intermediate container 712b904383d0
---> ccbb28d8c239
Step 3/22 : ARG TARGETPLATFORM
---> Running in 1ab05d712258
Removing intermediate container 1ab05d712258
---> 8249a0d59e7f
Step 4/22 : ENV TARGETPLATFORM=${TARGETPLATFORM:-linux/amd64}
---> Running in 5c94956e5e3d
Removing intermediate container 5c94956e5e3d
---> 05ad1a0c0b19
Step 5/22 : RUN case "${TARGETPLATFORM}" in 'linux/arm64' | 'linux/arm/v7') apk add --no-cache python3 make g++ && ln -s /usr/bin/python3 /usr/bin/python ;; esac
---> Running in cc6c5dfb6be0
Removing intermediate container cc6c5dfb6be0
---> 325a354be875
Step 6/22 : COPY package.json yarn.lock ./
---> 8a87ab809095
Step 7/22 : RUN CYPRESS_INSTALL_BINARY=0 yarn install --frozen-lockfile --network-timeout 1000000
---> Running in 221e58221290
yarn install v1.22.19
[1/4] Resolving packages...
[2/4] Fetching packages...
error An unexpected error occurred: "https://registry.yarnpkg.com/glob/-/glob-7.2.3.tgz: connect ENETUNREACH 104.16.24.35:443".
info If you think this is a bug, please open a bug report with the information provided in "/app/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
The command '/bin/sh -c CYPRESS_INSTALL_BINARY=0 yarn install --frozen-lockfile --network-timeout 1000000' returned a non-zero code: 1
Upgrading the Node version in the Dockerfile to node:19-alpine (from 16.17-alpine) builds successfully. This is most likely due to Node 17 changing the default of dns.lookup()'s verbatim option to true, so addresses are returned in the order the resolver provides them instead of IPv4 results always being placed first. I imagine there is a reason Node is still pinned to this version, and I don't want to submit a pull request bumping it without knowing what that would affect. However, the current version is causing problems for the growing number of users without IPv4 addresses.
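If bumping Node turns out to be risky, Node 16.4+ already exposes the newer behaviour behind a switch, so a stopgap might be possible without changing the base image. A minimal sketch (untested against Overseerr itself; it assumes the call can run in the server entry point before any outbound requests are made):

```ts
// Stopgap sketch for Node >= 16.4: opt in to the address ordering that
// became the default in Node 17, so AAAA results are no longer sorted
// behind unreachable A results.
import { setDefaultResultOrder } from 'dns';

setDefaultResultOrder('verbatim');
```

The same switch should also be reachable without code changes by setting NODE_OPTIONS=--dns-result-order=verbatim in the container environment.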
As a note, I am using nat64.net to provide IPv6 addresses for sites that do not have IPv6 set up yet. This service works for every other application on my network, though I am happy to try a different one if that would help with troubleshooting.
Screenshots
No response
Logs
No response
Platform
desktop
Device
Docker version 20.10.21, build baeda1f
Operating System
Debian GNU/Linux 11
Browser
N/A
Additional Context
No response
Code of Conduct
- [X] I agree to follow Overseerr's Code of Conduct
I actually have a similar issue, but not in an IPv6-only environment: my container has both IPv6 and IPv4 addresses.
2023-01-14T04:18:44.475Z [error][Plex.tv API]: Something went wrong while getting the account from plex.tv: getaddrinfo ENOTFOUND plex.tv
This happens despite plex.tv being resolvable via dig inside the container.
The version of Node currently used (16.17) was the latest LTS at the time and is due for a bump to 18.12, which was promoted to LTS around the end of last year.
I also see what looks like the same behaviour in Kubernetes. My Overseerr pod has both IPv4 and IPv6 addresses, and I can resolve names fine within the container, but the app isn't able to resolve anything:
2023-03-22T23:43:37.671Z [warn][GitHub API]: Failed to retrieve GitHub releases. This may be an issue on GitHub's end. Overseerr can't check if it's on the latest version. {"errorMessage":"getaddrinfo ENOTFOUND api.github.com"}
2023-03-22T23:44:47.647Z [error][Plex.tv API]: Something went wrong while getting the account from plex.tv: getaddrinfo ENOTFOUND plex.tv
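Getting getaddrinfo ENOTFOUND while dig succeeds points at Node's lookup path (the system getaddrinfo call) rather than DNS itself, since dig queries the nameserver directly, as Node's dns.resolve*() functions do. A small diagnostic sketch to narrow it down inside the pod (plex.tv just as an example hostname):

```ts
// Diagnostic sketch: dns.lookup() uses getaddrinfo (the call failing with
// ENOTFOUND above), while dns.resolve4()/resolve6() query the nameserver
// from resolv.conf directly, the same path dig takes.
import { lookup, resolve4, resolve6 } from 'dns';

lookup('plex.tv', { all: true }, (err, addrs) =>
  console.log('lookup:', err ? err.code : addrs));
resolve4('plex.tv', (err, addrs) =>
  console.log('resolve4:', err ? err.code : addrs));
resolve6('plex.tv', (err, addrs) =>
  console.log('resolve6:', err ? err.code : addrs));
```

If resolve4/resolve6 succeed while lookup fails, the problem is in how getaddrinfo behaves inside the pod rather than in Overseerr's configuration.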
For anyone running into this, I worked around it by disabling IPv6 in the pod with the following securityContext:
securityContext:
  sysctls:
    - name: net.ipv6.conf.all.disable_ipv6
      value: "1"
You will need to allow this unsafe sysctl in your kubelet args (e.g. --allowed-unsafe-sysctls=net.ipv6.conf.all.disable_ipv6).
> For anyone running into this, I worked around it by disabling ipv6
I'm sorry, but you shouldn't disable IPv6. I also don't think that securityContext will affect only that pod; it could potentially affect others on the same host as well.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This is still an issue
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This is a bug, which probably should be fixed.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Commenting for the bot
I have the same issue. If upgrading the Node version is the only step needed, I don't really see what is stopping this issue from being resolved. It seems the current version is 18, but the IPv6 handling is fixed starting with v19.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
The issue is still very much relevant.