James D. Marble


I'm seeing the same issue running in Kubernetes. Might be related to [this bug](https://github.com/gliderlabs/docker-alpine/issues/539) in Alpine. Edit: scratch that. I rebuilt using `node:12-alpine3.10` and still had the problem.

I [ported to node:12-slim](https://gitlab.com/jdmarble/foundryvtt-docker/-/commit/0e4f4b71edba3f95e8e101d1ceb6d72edb572be3) to work around the problem. I've been running into a lot of DNS issues with Alpine-based images. Not sure if it's my k8s cluster's configuration,...
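
For context, a commonly cited culprit when Alpine images misbehave in Kubernetes is musl's DNS resolver interacting badly with the cluster's `search` domains and the default `ndots: 5`. A mitigation some people use (separate from the base-image swap above; the deployment name here is hypothetical) is lowering `ndots` in the pod spec:

```yaml
# Fragment of a Deployment spec: lower ndots so the musl resolver
# queries names absolutely instead of walking the search-domain list.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foundryvtt # hypothetical name, for illustration only
spec:
  template:
    spec:
      dnsConfig:
        options:
          - name: ndots
            value: "1" # Kubernetes defaults this to 5
```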

I expected the Debian-based image (even the slim variant) to be larger than the Alpine one. I was surprised, although I'm not sure I can trust the results because I don't...

I've resolved the DNS issue I've been having while running this and other Alpine-based images in Kubernetes clusters on my network. _Short answer_: I turned off DNSSEC for my...

I got this working with some Helm chart overrides:

```yaml
nodeSelector:
  beta.kubernetes.io/arch: arm64
pushgateway:
  tag: v1.2.0
cleaner:
  registry: rancher
  repository: kubectl
  tag: v1.17.0
```

I think it would be possible...
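
If anyone wants to try the same thing, the overrides above can be saved to a `values.yaml` and applied with something like `helm upgrade --install <release> <chart> -f values.yaml`; the release and chart names depend on your setup.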

Just hit this myself. Tests working on all of my machines except the one that I just spun up. No changes between boxes. Strange!

I'm working around the problem by deleting the entire cluster and starting over. :laughing:

I'm blocked on this too. It doesn't seem to matter what facility I specify; the response is always about `hk2`: `422 No available servers with plan m3.small.x86 in facility hk2` ....

Still getting this panic with more than one `$patch: delete` in v5.4.3.
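
For anyone trying to reproduce: the patches in question look roughly like this strategic merge patch. The resource and container names are hypothetical; the key detail is two `$patch: delete` directives in one patch.

```yaml
# Hypothetical strategic merge patch with two delete directives.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example # hypothetical resource name
spec:
  template:
    spec:
      containers:
        - name: sidecar-a # remove this container from the base
          $patch: delete
        - name: sidecar-b # remove this container too
          $patch: delete
```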

I'm running into this problem without zarf, but using kustomize to overlay image digests on the default manifests from <https://raw.githubusercontent.com/longhorn/longhorn/v1.7.2/deploy/longhorn.yaml>. In my case, I can have kustomize also replace...
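
A minimal sketch of that kind of overlay, assuming the image name matches what the upstream manifest uses and with a placeholder digest:

```yaml
# kustomization.yaml -- pin a Longhorn image to a digest on top of the
# upstream release manifest. The digest below is a placeholder, not real.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - https://raw.githubusercontent.com/longhorn/longhorn/v1.7.2/deploy/longhorn.yaml

images:
  - name: longhornio/longhorn-manager # check against the manifest
    digest: sha256:0000000000000000000000000000000000000000000000000000000000000000
```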