Ditto API URL is not opening
Hello, the Ditto API URL is not opening:
iffroot4@iff-dt:~$ echo $DITTO_API_BASE_URL
http://10.104.54.99:8080
I guess you were installing the cloud2edge chart here.
I've just tested the cloud2edge chart installation in minikube and the Ditto containers are not getting ready. In the ditto-gateway container, I see errors like:
{"timestamp":"2024-05-06T22:46:06.635+02:00","version":"1","message":"Probing [http://10-244-10-40.cloud2edge.pod.cluster.local:7626/bootstrap/seed-nodes] failed due to: Probing timeout of [http://10-244-10-40.cloud2edge.pod.cluster.local:7626]","logger_name":"org.apache.pekko.management.cluster.bootstrap.internal.HttpContactPointBootstrap","thread_name":"ditto-cluster-pekko.actor.default-dispatcher-7","level":"WARN","level_value":30000,"sourceThread":"ditto-cluster-pekko.actor.default-dispatcher-13","pekkoAddress":"pekko://[email protected]:2551","pekkoSource":"pekko://[email protected]:2551/system/bootstrapCoordinator/contactPointProbe-10-244-10-40.cloud2edge.pod.cluster.local-7626"}
I haven't seen this when previously installing this cloud2edge chart version.
The issue also occurs when just installing the ditto chart in the version used by the c2e chart (helm install -n ditto my-ditto oci://registry-1.docker.io/eclipse/ditto --version 3.4.4 --wait).
I'll be doing some further tests here, also integrating the latest ditto chart version.
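To double-check which chart and app version is actually deployed, helm list can be used (a quick sketch; the release names and namespaces are taken from the commands above):
$ helm list -n cloud2edge
$ helm list -n ditto
The CHART and APP VERSION columns show which Ditto chart version each release is based on.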
As it turned out, there was a general issue with DNS resolution in my minikube cluster, unrelated to the Helm charts. It caused host names like 10-244-10-40.cloud2edge.pod.cluster.local not to be resolved correctly. After fixing this (I had to change /etc/resolv.conf via minikube ssh), the Ditto pods got ready again.
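A quick way to verify in-cluster DNS resolution is to resolve such a pod host name from a throwaway pod (a sketch; the host name is just the one from the log above, and busybox:1.28 is used because its nslookup behaves reliably):
$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup 10-244-10-40.cloud2edge.pod.cluster.local
If this times out or fails, and a name like kubernetes.default cannot be resolved either, the problem is the cluster DNS setup rather than the Helm charts.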
@charanhs123 To see what the issue is in your case, you can first check the status of the ditto pods:
$ kubectl get pods -n cloud2edge
NAME                                     READY   STATUS    RESTARTS   AGE
c2e-ditto-connectivity-98cc4b5ff-m2wvw   1/1     Running   0          15m
c2e-ditto-dittoui-d5f4fb467-n9v7b        1/1     Running   0          15m
c2e-ditto-gateway-9d5b747c4-hgkdc        1/1     Running   0          15m
c2e-ditto-nginx-556f75787f-bpwbd         1/1     Running   0          15m
c2e-ditto-policies-6b4b595cbb-tk4gr      1/1     Running   0          15m
c2e-ditto-swaggerui-69dbffb744-vvsgs     1/1     Running   0          15m
c2e-ditto-things-65dd85dc98-x4nk2        1/1     Running   0          15m
c2e-ditto-thingssearch-f6bd5f559-gcdvg   1/1     Running   0          15m
[...]
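If you just want to wait until all pods become ready instead of polling manually, a kubectl wait call like this can help (a sketch; adjust the timeout as needed):
$ kubectl wait --for=condition=Ready pods --all -n cloud2edge --timeout=300s
If the timeout is hit, at least one pod never became ready and is worth a closer look.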
If pod containers are not ready (especially the ditto-gateway), you can run
kubectl describe pod -n cloud2edge c2e-ditto-gateway-9d5b747c4-hgkdc
and check the container status, e.g.
Containers:
  ditto-gateway:
    Container ID:   docker://011c47291dae3781dff897b82a9cd8ee246c1167d97357d1f9904f78dc858a36
    Image:          docker.io/eclipse/ditto-gateway:3.4.0
    Image ID:       docker-pullable://eclipse/ditto-gateway@sha256:e54f9d234437df3cb96afe262a4feaa2b500af7dda91574a6abbfb6411bbf681
    Ports:          8080/TCP, 2551/TCP, 7626/TCP, 9095/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP
    State:          Running
      Started:      Thu, 09 May 2024 14:15:57 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Thu, 09 May 2024 13:50:30 +0200
      Finished:     Thu, 09 May 2024 14:15:28 +0200
    Ready:          True
and the events at the bottom of the output. Also check the logs of the pod:
kubectl logs -n cloud2edge c2e-ditto-gateway-9d5b747c4-hgkdc
and search for "level":"ERROR" in the output.
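For example (just a sketch; replace the pod name with the one from your cluster, and --previous only works if the container has been restarted):
$ kubectl logs -n cloud2edge c2e-ditto-gateway-9d5b747c4-hgkdc | grep '"level":"ERROR"'
$ kubectl logs -n cloud2edge c2e-ditto-gateway-9d5b747c4-hgkdc --previous | grep '"level":"ERROR"'
The second command shows the logs of the previous container instance, which helps when the container was terminated and restarted (as in the Last State shown above).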
Also make sure that you have enough memory and disk space for your Kubernetes cluster (see the requirements here).
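A rough way to check the available resources in a minikube setup (a sketch; kubectl top nodes is an alternative but requires the metrics-server addon to be enabled):
$ kubectl describe nodes | grep -A 7 'Allocated resources'
$ minikube ssh "df -h /"
The first command shows how much CPU and memory is already requested on the node(s), the second one the free disk space inside the minikube VM.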