Autogenerated upstream from (service) backend does not match name from CRD: upstream doesn't exist
Current State
I could find no documentation about how to do this. My understanding was that the ApisixUpstream should be named the same as the referenced Service and live in the same namespace, but I still get:
ApisixIngress synced failed, with error: upstream doesn't exist. It will be created after ApisixRoute is created referencing it.
For a concrete example:
apiVersion: apisix.apache.org/v2
kind: ApisixUpstream
metadata:
  name: bitter
spec:
  timeout:
    read: 180s
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: bitter
spec:
  http:
  - backends:
    - serviceName: bitter
      servicePort: 80
    match:
      hosts:
      - bitter.test
      paths:
      - /*
    name: route-open
I can see that this backend creates an upstream called default_bitter_80, but the upstream config is not applied (as indicated by the error message). Similarly, I can't name the ApisixUpstream default_bitter_80 because Kubernetes forbids _ in resource names, so how am I supposed to get this to match?
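As a sketch of the convention observed above (an assumption inferred from the reported name, not from documentation): the 1.x controller appears to compose the generated upstream name from namespace, Service name, and port.

```shell
# Hypothetical reconstruction of the generated upstream name for the example above.
# The pattern "<namespace>_<service>_<port>" matches the observed "default_bitter_80".
ns="default"; svc="bitter"; port="80"
echo "${ns}_${svc}_${port}"   # → default_bitter_80
```

Since `_` is not allowed in Kubernetes resource names, this generated name can never equal an ApisixUpstream's metadata.name directly, which is why matching must happen on the Service name instead.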
Update
When I tried to use another name, I got the error message ApisixIngress synced failed, with error: service "bitter--80" not found, so it seems the original name was right but the upstream was still not matched.
APISIX Version: 3.9.1
Desired State
Describe how one can configure the autogenerated upstream from a backend route.
I have run into upstream doesn't exist. It will be created after ApisixRoute is created referencing it. when running kubectl apply on an upstream manifest. But once I created the Service with a route pointing to the upstream, I could apply the ApisixUpstream manifest without issue.
All I did was make sure the names correspond, and create the ApisixRoute first and the ApisixUpstream second.
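The snippet showing the correspondence appears to be missing from the comment; presumably it is the ApisixUpstream metadata.name matching the route backend's serviceName, along these lines (resource names here are placeholders, not from the original):

```yaml
apiVersion: apisix.apache.org/v2
kind: ApisixUpstream
metadata:
  name: my-svc              # must equal the backend serviceName below
spec:
  timeout:
    read: 60s
---
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: my-route
spec:
  http:
  - name: rule-1
    match:
      paths:
      - /*
    backends:
    - serviceName: my-svc   # same name, same namespace as the ApisixUpstream
      servicePort: 80
```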
I deleted the upstream and upgraded the Helm chart that creates both the route and the upstream. Still the same error, despite the route existing before the upstream.
I am also wondering how the matching should work, considering that the full upstream name contains the port number but the upstream CRD does not.
I reproduced this on a local test cluster with log_level: debug in the ingress controller and got this additional message:
debug apisix/upstream.go:43 try to look up upstream {"name": "default_bitter_80_service"
However, as I said, the actual upstream name is default_bitter_80, so no wonder there is no match.
Please note that I am relying on the bitnami chart version 3.3.9.
Given this, I also think this is not a docs issue (though the docs around this could be improved) but an actual bug.
This used to work with APISIX 3.5.0 (from Bitnami chart version 2.1.1).
This issue has been marked as stale due to 90 days of inactivity. It will be closed in 30 days if no further activity occurs. If this issue is still relevant, please simply write any comment. Even if closed, you can still revive the issue at any time or discuss it on the [email protected] list. Thank you for your contributions.
I would still like to get this working as it prevents upgrades.
I'm having the same issue. ApisixUpstream is not working in the latest version.
Any updates, @renanramonh?
I'm experiencing what appears to be a similar issue. The logs show an error similar to what @martin-schulze-e2m mentioned: one of the logged errors says it failed to get the upstream, where the queried name has _service appended, but the upstream that was created when the ApisixRoute was deployed does not have the _service suffix.
Interestingly, the upstream does successfully get updated with the healthCheck.active.httpPath specified in the associated ApisixUpstream, so it seems like the APISIX Ingress Controller might be updating things successfully using the correct name, but is perhaps also trying other conventions like the _service-suffixed name, hash id, etc.? 🤔
[!WARNING]
While that wouldn't be a major issue if the APISIX Ingress Controller tried the additional conventions a handful of times, it seems like it will endlessly retry and fill up the apisix-ingress-controller logs. Furthermore, the ApisixUpstream status remains in error and there's an associated ResourceSyncAborted event.
ApisixUpstream: status
status:
  conditions:
  - message: not found
    observedGeneration: 1
    reason: ResourceSyncAborted
    status: "False"
    type: ResourcesAvailable
Event: ResourceSyncAborted
apiVersion: v1
count: 2583
eventTime: null
firstTimestamp: "2025-02-11T17:59:11Z"
involvedObject:
  apiVersion: apisix.apache.org/v2
  kind: ApisixUpstream
  name: httpbin-ngx-svc
  namespace: default
  resourceVersion: "10569840"
  uid: c25a18d3-31b5-489c-a8fa-3c7c25cc59d2
kind: Event
lastTimestamp: "2025-02-12T02:34:11Z"
message: 'ApisixIngress synced failed, with error: not found'
metadata:
  creationTimestamp: "2025-02-11T17:59:11Z"
  name: httpbin-ngx-svc.1823392e1e986cd6
  namespace: default
  resourceVersion: "10688404"
  uid: 05f4b8a8-b5e4-4fc3-b86e-d0d00bce546b
reason: ResourceSyncAborted
reportingComponent: ApisixIngress
reportingInstance: ""
source:
  component: ApisixIngress
type: Warning
For reference, I verified the upstreams by port forwarding 9180 from the apisix-admin service and doing a simple curl against the admin API like:
curl http://127.0.0.1:9180/apisix/admin/upstreams -H 'X-API-Key: {YOUR-ADMIN-KEY}' | jq .
(where the default admin key is edd1c9f034335f136f87ad84b625c8f1 if you haven't changed it)
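To compare the created upstream names directly against what the controller looks up, the response can be filtered down to just the names. This assumes the APISIX 3.x Admin API response shape, with a top-level list array of key/value objects:

```shell
# List only the upstream names, e.g. to spot "default_bitter_80"
# vs. the "_service"-suffixed name the controller queries.
curl -s http://127.0.0.1:9180/apisix/admin/upstreams \
  -H 'X-API-Key: {YOUR-ADMIN-KEY}' | jq -r '.list[].value.name'
```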
Logs
warn apisix/apisix_upstream.go:489 sync ApisixUpstream failed, will retry {"object": {"Type":4,"Object":{"Key":"default/httpbin-ngx-svc","OldObject":null,"GroupVersion":"apisix.apache.org/v2"},"OldObject":null,"Tombstone":null}, "error": "not found"}
error apisix/apisix_upstream.go:333 failed to get upstream default_httpbin-ngx-svc_80_service: not found
warn apisix/cluster.go:1164 upstream not found {"id": "e9357f4d", "url": "http://apisix-admin.ingress-apisix.svc.cluster.local:9180/apisix/admin/upstreams/e9357f4d", "cluster": "default"}
Example Resources
app-hello-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-ngx-deploy
  labels:
    app: httpbin-ngx
spec:
  replicas: 3 # Adjust the number of replicas as needed
  selector:
    matchLabels:
      app: httpbin-ngx
  template:
    metadata:
      labels:
        app: httpbin-ngx
    spec:
      containers:
      - name: httpbin-ngx
        image: nginxdemos/hello:latest
app-hello-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: httpbin-ngx-svc
  labels:
    app: httpbin-ngx # label for better resource grouping and querying
spec:
  selector:
    app: httpbin-ngx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
apisix-route-test.yaml
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: route-hello-test
spec:
  http:
  - name: hello-test
    match:
      hosts:
      - hello-test.demo.com
      paths:
      - /*
    # Use a Kubernetes Service directly as the backend
    backends:
    - serviceName: httpbin-ngx-svc
      servicePort: 80
apisix-upstream-test.yaml
apiVersion: apisix.apache.org/v2
kind: ApisixUpstream
metadata:
  name: httpbin-ngx-svc # match the service name
spec:
  healthCheck:
    active:
      type: http # default: http
      httpPath: /foo?health=4
@Revolyssup and @shreemaan-abhishek is there a different communication channel that would be preferred to track down this issue?
It causes the apisix-ingress-controller to endlessly log warnings that the upstream was not found, along with an error that it failed to get the upstream with _service appended to the looked-up name (the created upstream does not have the _service suffix).
Can someone confirm whether this issue is solved? If so, in which version? Thanks.
Hi. Any updates?
Hi everyone, we have released a new 2.x version, 2.0.0-rc3. The old 1.x version may no longer be maintained. In the new version, when an ApisixRoute creates a route, it generates an upstream for each backend. If an ApisixUpstream resource exists with the same name as backend.serviceName, the ApisixUpstream's configuration (load balancing, health checks, etc.) is automatically used instead of the default upstream configuration.
If the problem persists, please reopen the issue.
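A minimal sketch of the 2.x matching described above (the resource names here are illustrative, not from the thread): the ApisixUpstream is picked up when its metadata.name equals the route backend's serviceName.

```yaml
apiVersion: apisix.apache.org/v2
kind: ApisixUpstream
metadata:
  name: demo-svc            # equals backend.serviceName below
spec:
  healthCheck:
    active:
      type: http
      httpPath: /healthz
---
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: demo-route
spec:
  http:
  - name: rule-1
    match:
      hosts:
      - demo.test
      paths:
      - /*
    backends:
    - serviceName: demo-svc # this ApisixUpstream's config is applied to the generated upstream
      servicePort: 80
```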