Varnish does not start
Describe the bug
Varnish does not start, and there are no errors in the pod's logs.
To Reproduce
Steps to reproduce the behavior:
- Install Helm chart 0.7.2
- Use values:
# Default values for kube-httpcache.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: quay.io/mittwald/kube-httpcache
  pullPolicy: IfNotPresent
  tag: "stable"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

# Enable StatefulSet (Deployment is default)
useStatefulset:
  enabled: false

# Enable configMap for Varnish Template File (see below vclTemplate)
# OR use extravolume with name "template" if the file is too big
configmap:
  enabled: true

# kube-httpcache specific configuration
cache:
  # name of frontend service
  # frontendService: kube-httpcache-headless
  # name of backend service
  backendService: backend-service
  # name of backend service namespace
  # backendServiceNamespace: backend-service-namespace
  # watching for frontend changes is true by default
  frontendWatch: true
  # watching for backend changes is true by default
  backendWatch: true
  # Varnish storage backend type (https://varnish-cache.org/docs/trunk/users-guide/storage-backends.html)
  varnishStorage: malloc # default,malloc,umem,file...
  # Varnish storage backend size
  storageSize: 128M # K(ibibytes), M(ebibytes), G(ibibytes), T(ebibytes) ... unlimited
  # Varnish transient storage backend type (https://varnish-cache.org/docs/trunk/users-guide/storage-backends.html)
  #varnishTransientStorage: malloc
  # Varnish transient storage backend size
  #transientStorageSize: 128M # K(ibibytes), M(ebibytes), G(ibibytes), T(ebibytes) ... unlimited
  # Secret for Varnish admin credentials
  secret: "HIDDEN"
  # Read admin credentials from user provided secret
  #existingSecret: kubecache-secret

cacheExtraArgs: {}
# cacheExtraArgs: |
#   - -v=8
#   - -varnish-additional-parameters=vcc_allow_inline_c=on

serviceAccount:
  # Specifies whether a service account should be created
  enabled: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

rbac:
  enabled: true

# create a prometheus operator ServiceMonitor
serviceMonitor:
  enabled: false
  additionalLabels: {}
  ## Scrape interval. If not set, the Prometheus default scrape interval is used.
  interval: 10s
  ## Scrape Timeout. If not set, the Prometheus default scrape timeout is used.
  scrapeTimeout: ""

podSecurityPolicy:
  enabled: false
  # name: unrestricted-psp
  annotations: {}

podAnnotations: {}
podLabels: {}

podSecurityContext: {}
# fsGroup: 2000

securityContext: {}
# capabilities:
#   drop:
#   - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000

lifecycle: {}
# preStop:
#   exec:
#     command:
#     - /bin/sh
#     - -c
#     - touch /etc/varnish/fail_probes; sleep 25

topologySpreadConstraints: {}
# - topologyKey: topology.kubernetes.io/zone
#   maxSkew: 1
#   whenUnsatisfiable: ScheduleAnyway
#   labelSelector:
#     matchLabels:
#       app.kubernetes.io/name: kube-httpcache
# - topologyKey: kubernetes.io/hostname
#   maxSkew: 1
#   whenUnsatisfiable: ScheduleAnyway
#   labelSelector:
#     matchLabels:
#       app.kubernetes.io/name: kube-httpcache

initContainers: {}
# initContainers: |
#   - args:
#     - -c
#     - |
#       echo "Copying external varnish template from..."
#     command:
#     - sh
#     image: busybox:latest
#     imagePullPolicy: IfNotPresent
#     name: varnishtemplate
#     resources: {}
#     terminationMessagePath: /dev/termination-log
#     terminationMessagePolicy: File
#     volumeMounts:
#     - name: template
#       mountPath: /etc/varnish/tmpl

extraContainers: []
# - name: my-sidecar
#   image: myapp/my-sidecar
#   command:
#   - my-sidecar-command

extraVolumes: {}
# extraVolumes:
#   - emptyDir: {}
#     name: template

extraMounts: {}
# extraMounts:
#   - name: geoip
#     mountPath: /var/lib/geoip

extraEnvVars: {}
#extraEnvVars:
#  - name: foo
#    value: bar

extraEnvFromConfig: {}
#extraEnvFromConfig:
#  - configMapRef:
#      name: my-configmap-name
#  - secretRef:
#      name: my-secret-name

exporter:
  enabled: false
  securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000
  resources: {}
  livenessProbe: {}
  # livenessProbe:
  #   httpGet:
  #     path: /
  #     port: 6083
  readinessProbe: {}

service:
  type: ClusterIP
  port: 80
  target: 8080

ingress:
  enabled: false
  annotations: {}
  # kubernetes.io/tls-acme: "true"
  className: nginx
  hosts: []
  # hosts:
  #   - host: www.example.com
  #     paths:
  #       - path: /
  #         pathType: Prefix
  #         backend:
  #           service:
  #             name: kube-httpcache
  #             port:
  #               number: 80
  #       - path: /backend
  #         backend:
  #           name: backend-service
  #           port:
  #             number: 8080
  #   - host: www2.example.com
  #     paths:
  #       - path: /
  #         pathType: Prefix
  #         backend:
  #           name: kube-httpcache
  #           port:
  #             number: 80
  tls: []
  # - secretName: chart-example-tls
  #   hosts:
  #     - chart-example.local

resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
#   cpu: 100m
#   memory: 128Mi
# requests:
#   cpu: 100m
#   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

#terminationGracePeriodSeconds: 60

affinity: {}

livenessProbe: {}
# livenessProbe:
#   httpGet:
#     path: /
#     port: 6083

readinessProbe: {}

vclTemplate: |
  vcl 4.0;

  import std;
  import directors;

  // ".Frontends" is a slice that contains all known Varnish instances
  // (as selected by the service specified by -frontend-service).
  // The backend name needs to be the Pod name, since this value is compared
  // to the server identity ("server.identity" [1]) later.
  //
  // [1]: https://varnish-cache.org/docs/6.4/reference/vcl.html#local-server-remote-and-client
  {{ range .Frontends }}
  backend {{ .Name }} {
      .host = "{{ .Host }}";
      .port = "{{ .Port }}";
  }
  {{- end }}

  {{ range .Backends }}
  backend be-{{ .Name }} {
      .host = "{{ .Host }}";
      .port = "{{ .Port }}";
  }
  {{- end }}

  sub vcl_init {
      new cluster = directors.hash();

      {{ range .Frontends -}}
      cluster.add_backend({{ .Name }}, 1);
      {{ end }}

      new lb = directors.round_robin();

      {{ range .Backends -}}
      lb.add_backend(be-{{ .Name }});
      {{ end }}
  }

  sub vcl_recv
  {
      # Set backend hint for non cacheable objects.
      set req.backend_hint = lb.backend();

      # ...

      # Routing logic. Pass a request to an appropriate Varnish node.
      # See https://info.varnish-software.com/blog/creating-self-routing-varnish-cluster for more info.
      unset req.http.x-cache;
      set req.backend_hint = cluster.backend(req.url);
      set req.http.x-shard = req.backend_hint;
      if (req.http.x-shard != server.identity) {
          return(pass);
      }
      set req.backend_hint = lb.backend();

      # ...

      return(hash);
  }
- Pod's logs are:
I0601 13:07:34.640328 1 main.go:31] running kube-httpcache with following options: {Kubernetes:{Config: RetryBackoffString:30s RetryBackoff:30s} Frontend:{Address:0.0.0.0 Port:8080 Watch:true Namespace:kube-httpcache Service:kube-httpcache PortName:http} Backend:{Watch:true Namespace:kube-httpcache Service:backend-service Port: PortName:http} Signaller:{Enable:true Address:0.0.0.0 Port:8090 WorkersCount:1 MaxRetries:5 RetryBackoffString:30s RetryBackoff:30s QueueLength:0 MaxConnsPerHost:-1 MaxIdleConns:-1 MaxIdleConnsPerHost:-1 UpstreamRequestTimeoutString: UpstreamRequestTimeout:0s} Admin:{Address:0.0.0.0 Port:6083} Varnish:{SecretFile:/etc/varnish/k8s-secret/secret Storage:malloc,128M TransientStorage:malloc,128m AdditionalParameters: VCLTemplate:/etc/varnish/tmpl/default.vcl.tmpl VCLTemplatePoll:false WorkingDir:} Readiness:{Enable:true Address:0.0.0.0:9102}}
I0601 13:07:34.640398 1 main.go:38] using in-cluster configuration
I0601 13:07:34.641165 1 run.go:15] waiting for initial configuration before starting Varnish
It hangs on "waiting for initial configuration before starting Varnish".
Expected behavior
Varnish starts.
Environment:
- Kubernetes version: 1.24, EKS
- kube-httpcache version: 0.7.2
Thanks for the report. This is odd; after the "waiting for initial configuration before starting Varnish" message, the controller should watch the backend endpoints and render its configuration based on the observed endpoints.
It's been a few years since I last touched that particular piece of code, but I'm wondering if there's an initial GET missing when setting up the endpoint watch:
https://github.com/mittwald/kube-httpcache/blob/60f555d269876133cd5d7a22dddf1f7f8874454c/pkg/watcher/endpoints_watch.go#L23-L27
Otherwise, the controller's startup would depend entirely on the endpoints object happening (by chance) to be updated after the watch is established. I'll investigate some more.
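To illustrate the hypothesis, here is a minimal sketch of what such an initial read could look like; this is hypothetical code, not the actual kube-httpcache implementation, and the function name and channel wiring are assumptions:

// Hypothetical sketch: prime the watcher with one initial GET so the
// controller can render its first configuration immediately, instead of
// waiting for the next (chance) update of the Endpoints object.
package watcher

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// primeEndpoints reads the current Endpoints object once, before the watch
// is started, and pushes it into the same channel the watch later feeds.
func primeEndpoints(ctx context.Context, client kubernetes.Interface, namespace, service string, updates chan<- *v1.Endpoints) error {
	endpoints, err := client.CoreV1().Endpoints(namespace).Get(ctx, service, metav1.GetOptions{})
	if err != nil {
		return err
	}
	updates <- endpoints
	return nil
}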
I was able to confirm this issue, but only in the case where the watched backend service either did not exist or had no endpoints. Can you confirm that your backend service both exists and has endpoints when the issue occurs?
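For anyone who wants to verify this, both conditions can be checked with kubectl (using the backend-service name from the values above and the kube-httpcache namespace from the log output; adjust both to your setup):

kubectl get service backend-service -n kube-httpcache
kubectl get endpoints backend-service -n kube-httpcache

If the second command prints <none> in the ENDPOINTS column, the watcher has nothing to render a configuration from.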
I'm facing the same problem. I noticed that the /tmp/vcl file is properly populated with the backends, and kubectl get endpoints correctly lists the backend services' IPs.
Note: I'm deploying all the manifests from a single file. I'll try applying them individually afterwards.
Never mind. It turned out I had syntax problems in my Varnish settings.
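For future readers who end up here with the same symptom: the rendered VCL (e.g. the /tmp/vcl file mentioned above, not the Go template itself) can be syntax-checked locally by letting varnishd compile it and exit:

varnishd -C -f /tmp/vcl

A syntax error aborts the compile with a diagnostic pointing at the offending line; on success the compiled C source is printed.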