
EAI_AGAIN error on microk8s

BitRacer opened this issue on Jan 11, 2022 · 3 comments

🐛 Bug Report

I am unable to cleanly start the :stable version of the container on my 4-node microk8s cluster (arm64v8).

To Reproduce

Steps to reproduce the behavior:

  • create secret
---
apiVersion: v1
kind: Secret
data:
  FOUNDRY_ADMIN_KEY: <base64 encoded key here>
  FOUNDRY_PASSWORD: <base64 encoded password here>
metadata:
  name: foundry-secrets
  namespace: foundry
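The `data` values in a Secret must be base64-encoded. A quick way to produce them (the strings here are placeholders — substitute your real Foundry credentials):

```shell
# Placeholder values -- replace with your real credentials.
echo -n 'my-admin-key' | base64    # value for FOUNDRY_ADMIN_KEY
echo -n 'my-password' | base64     # value for FOUNDRY_PASSWORD
```

Note the `-n`: without it, a trailing newline gets encoded into the value and the resulting credential will silently fail to match.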
  • create deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foundry
  namespace: foundry
  labels:
    app: foundry
spec:
  selector:
    matchLabels:
      app: foundry
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: foundry
    spec:
      volumes:
        - name: data
          emptyDir: {}
      containers:
        - name: foundry
          image: 'felddy/foundryvtt:release'
          env:
          - name: FOUNDRY_USERNAME
            value: "<insert username here>"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 30000
              protocol: TCP
          envFrom:
            - secretRef:
                name: foundry-secrets
          volumeMounts:
            - name: data
              mountPath: /data
          resources:
            requests:
              memory: 4G
              cpu: '2'
            limits:
              memory: 4G
              cpu: '2'
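As a diagnostic aside (a sketch, not a confirmed fix for this issue): one common cause of intermittent `getaddrinfo` failures in Kubernetes is the default resolver option `ndots:5`, which makes external names like `foundryvtt.com` get tried against every cluster search domain before being queried as-is. Lowering `ndots` on the pod spec is worth ruling out:

```yaml
# Add under spec.template.spec in the Deployment above.
      dnsConfig:
        options:
          - name: ndots
            value: "1"
```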

Expected behavior

Expecting the container to start up cleanly. What I find odd is that the EAI_AGAIN error appears to be DNS-related, yet the container does seem to perform a successful lookup when I exec into it.

log output

~/foundry$ k apply -f ./foundry.yaml
namespace/foundry unchanged
secret/foundry-secrets unchanged
deployment.apps/foundry created
~/foundry$ k get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/foundry-546d7c966c-pspn8   1/1     Running   0          4s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/foundry   1/1     1            1           4s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/foundry-546d7c966c   1         1         1       4s
~/foundry$ k exec deployment.apps/foundry -- nslookup foundryvtt.com
Server:         10.152.183.10
Address:        10.152.183.10:53

Non-authoritative answer:

Non-authoritative answer:
Name:   foundryvtt.com
Address: 18.237.161.213

~/foundry$ k logs --follow pod/foundry-546d7c966c-pspn8
Entrypoint | 2022-01-11 04:11:35 | [info] Starting felddy/foundryvtt container v9.242.0
Entrypoint | 2022-01-11 04:11:35 | [info] No Foundry Virtual Tabletop installation detected.
Entrypoint | 2022-01-11 04:11:35 | [info] Using FOUNDRY_USERNAME and FOUNDRY_PASSWORD to authenticate.
Authenticate | 2022-01-11 04:11:36 | [info] Requesting CSRF tokens from https://foundryvtt.com
Authenticate | 2022-01-11 04:11:41 | [error] Unable to authenticate: request to https://foundryvtt.com/ failed, reason: getaddrinfo EAI_AGAIN foundryvtt.com
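One detail worth noting when comparing the two results above (a hedged diagnostic sketch; the kubectl command is illustrative): `nslookup` queries the DNS server directly, while Node's `getaddrinfo` — the call returning EAI_AGAIN — resolves through the libc resolver, so the two can disagree. `getent hosts` exercises the same libc path as `getaddrinfo`:

```shell
# Illustrative: run the libc-resolver path inside the pod, the same
# path Node's getaddrinfo uses (nslookup bypasses it):
#
#   kubectl exec deployment/foundry -- getent hosts foundryvtt.com
#
# Locally, the equivalent call against a name that always resolves:
getent hosts localhost
```

If `getent` fails inside the pod while `nslookup` succeeds, the problem is in the resolver configuration (`/etc/resolv.conf`, search domains) rather than in CoreDNS itself.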

BitRacer avatar Jan 11 '22 04:01 BitRacer

This is probably related to, or the same as, issue #135

@BitRacer could you take a look through that issue and see if any of the behaviors you are seeing are similar.

felddy avatar Jan 11 '22 18:01 felddy

> This is probably related to, or the same as, issue #135
>
> @BitRacer could you take a look through that issue and see if any of the behaviors you are seeing are similar.

I did look at that one before I opened this. I'm not sure if it is related since I do appear to be able to run an nslookup from the container. My CoreDNS config looks very similar as well.

~/foundry$ k describe configmap coredns -n kube-system
Name:         coredns
Namespace:    kube-system
Labels:       addonmanager.kubernetes.io/mode=EnsureExists
              k8s-app=kube-dns
Annotations:  <none>

Data
====
Corefile:
----
.:53 {
    errors
    health {
      lameduck 5s
    }
    ready
    log . {
      class error
    }
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . 8.8.8.8 8.8.4.4
    cache 30
    loop
    reload
    loadbalance
}


BinaryData
====

Events:  <none>

BitRacer avatar Jan 11 '22 18:01 BitRacer