Cannot specify namespace to use for label selectors with `k8s_custom_deploy`
Expected Behavior
- Possible to rely on label selectors with a `k8s_custom_deploy` that does not return any YAML from `apply_cmd`
Current Behavior
- We only watch for events in namespaces where we've seen a deployed object, which we determine from the result YAML. If we never get any YAML, we won't watch any namespaces, and label selectors will silently fail
- If you also apply resource(s) that DO use result YAML, things will work implicitly, assuming there's a namespace overlap (see the sketch below)
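A minimal sketch of that implicit overlap (the file name `other-app.yaml` is hypothetical, assumed to deploy into the same namespace as the selector-matched pods):

```python
# Tilt parses this resource's result YAML, learns its namespace, and starts
# watching that namespace; a selectors-only resource in the same namespace
# then happens to work as a side effect.
k8s_yaml('other-app.yaml')
```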
Steps to Reproduce
- Create a `Tiltfile` with a `k8s_custom_deploy` that doesn't return YAML and relies on label selectors:

  ```python
  k8s_custom_deploy(
      'nginx',
      apply_cmd='kubectl apply -f app.yaml 1>&2',
      delete_cmd='kubectl delete -f app.yaml',
      deps=['app.yaml']
  )

  k8s_resource(
      'nginx',
      extra_pod_selectors={'app': 'web'},
      discovery_strategy='selectors-only'
  )
  ```

  `app.yaml`:

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web
  spec:
    selector:
      matchLabels:
        app: web
    replicas: 1
    template:
      metadata:
        labels:
          app: web
      spec:
        containers:
          - name: web
            image: nginx
            resources:
              limits:
                cpu: 100m
                memory: 128Mi
            ports:
              - containerPort: 80
            readinessProbe:
              httpGet:
                port: 80
              failureThreshold: 1
              periodSeconds: 10
  ```

- Run `tilt up`
- Run `kubectl get pod -l app=web` to see that the pod exists and is healthy (sample output after this list)
- Observe that the Tilt UI shows it as perpetually pending / has no logs / is unaware of the Pod's existence
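Sample output from the `kubectl get pod` step (illustrative; the pod name suffix and age will differ):

```
NAME                   READY   STATUS    RESTARTS   AGE
web-6d4cf56db6-x7k2p   1/1     Running   0          42s
```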
Context
I think the most logical thing here is to add an optional `namespace` arg to `k8s_resource` to explicitly use in the `KubernetesDiscoveryTemplateSpec` + `KubernetesDiscoverySpec`, in addition to(?) any implicitly discovered namespaces. It should probably be an error to set this without also populating `extra_pod_selectors`, because otherwise it's meaningless.
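A sketch of what that could look like in a Tiltfile (hypothetical; the `namespace` argument below is the proposed addition, not an existing `k8s_resource` parameter):

```python
k8s_resource(
    'nginx',
    extra_pod_selectors={'app': 'web'},
    discovery_strategy='selectors-only',
    # Proposed: watch this namespace explicitly, in addition to any
    # implicitly discovered ones. Setting it without extra_pod_selectors
    # would be an error.
    namespace='my-namespace',
)
```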
Alternatively, we could have a "magic" key in `extra_pod_selectors` that's something like `$namespace` (not a valid K8s label identifier, so no risk of overlap) to use for this purpose without changing the API.
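That alternative might look like this (also hypothetical; `$namespace` is not currently recognized by Tilt):

```python
k8s_resource(
    'nginx',
    # '$namespace' is not a valid K8s label key, so it can't collide with a
    # real label; Tilt would strip it out and treat it as the namespace to watch.
    extra_pod_selectors={'app': 'web', '$namespace': 'my-namespace'},
    discovery_strategy='selectors-only',
)
```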
We also currently eagerly watch namespaces based on the spec YAML; in the case of a custom `apply_cmd`, we could eagerly watch the "default" namespace instead, but that only works if that's the namespace the apply cmd actually uses, which isn't a given.
I've got a workaround if you want: you can define a function

```python
def deploy_yaml(service, namespace='default'):
    k8s_custom_deploy(
        service,
        apply_cmd='kubectl apply -f {service}.yaml -n {namespace} -o yaml'.format(service=service, namespace=namespace),
        delete_cmd='kubectl delete -f {service}.yaml -n {namespace}'.format(service=service, namespace=namespace),
        # Watch the YAML file itself so edits re-trigger the deploy.
        deps=['{service}.yaml'.format(service=service)],
    )
```
and call it as

```python
deploy_yaml("my-service", "my-namespace")
```
Notice the very important `-o yaml` in the apply command: it makes `kubectl` print the applied objects, which gives Tilt the result YAML it needs to discover the namespace.
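For reference, `kubectl apply ... -o yaml` echoes the applied object back, namespace included (abridged; values illustrative):

```
$ kubectl apply -f my-service.yaml -n my-namespace -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  namespace: my-namespace
  ...
```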
The UI shows this:

```
Deploying
Running cmd: kubectl apply -f dep/my-service.yaml -n my-namespace -o yaml
Objects applied to cluster:
  → my-service:deploy
```