Capturing process info from gunicorn running in a separate pod in Kubernetes
Hello, I am confused about how my dd-agents are supposed to get gunicorn information, since the dd-agents and my gunicorn backend are running in separate pods (I am using Kubernetes).
My pod setup looks like this (I used your DaemonSet to deploy the dd-agents):
NAME                        READY   STATUS    RESTARTS   AGE
dd-agent-oo9wv              1/1     Running   0          3d
dd-agent-rla1m              1/1     Running   0          3d
dd-agent-sn51m              1/1     Running   0          3d
postgres                    1/1     Running   0          14d
redis-standalone-be5r9      1/1     Running   0          13d
rob-3186572392-vga66        1/1     Running   0          3d
scheduler-531250947-bp29m   1/1     Running   0          12d
worker-876690324-ua652      1/1     Running   0          12d
etcd0                       1/1     Running   0          16h
etcd1                       1/1     Running   0          16h
etcd2                       1/1     Running   0          16h
The gunicorn process is running in rob-3186572392-vga66:
root 183 0.0 0.4 93996 19076 ? S Oct14 0:38 gunicorn: master [rob_backend]
Naturally, my dd-agents all report that they can't find the gunicorn process:
gunicorn
--------
- instance #0 [ERROR]: 'Found no master process with name: gunicorn: master [rob_backend]'
- Collected 0 metrics, 0 events & 2 service checks
- Dependencies:
- psutil: 3.3.0
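For reference, the gunicorn check on each agent is configured through a conf.d/gunicorn.yaml; below is a minimal sketch that matches the error above (proc_name is what the check compares against the master's process title):

init_config:

instances:
    # the check searches for a process titled "gunicorn: master [<proc_name>]"
  - proc_name: rob_backend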
Many thanks
Hi @jorgenbs, thank you for the detailed report. Have you tried using the agent's service discovery? It's made specifically for this kind of setup. You will find the documentation here: http://docs.datadoghq.com/guides/servicediscovery/
Let me know if you have any issues with it.
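For illustration, here is a rough sketch of a check template attached to a pod as annotations. Note that this uses the ad.datadoghq.com annotation prefix read by newer agents, so the exact mechanism for your agent version may differ (if I remember correctly, the guide above also covers storing templates in a key-value store like etcd or consul), and it only helps checks that reach the service over the network, redisdb in this sketch:

apiVersion: v1
kind: Pod
metadata:
  name: redis-standalone
  annotations:
    # each key holds a JSON list; %%host%% and %%port%% are resolved by the agent
    ad.datadoghq.com/redis.check_names: '["redisdb"]'
    ad.datadoghq.com/redis.init_configs: '[{}]'
    ad.datadoghq.com/redis.instances: '[{"host": "%%host%%", "port": "%%port%%"}]'
spec:
  containers:
    - name: redis
      image: redis:3.2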
I guess I need to re-read it, since I didn't think it solved my problem: the gunicorn integration looks for an actual process name running on the system, not a service host (like the postgres and redis checks do).
Ah, never mind, I replied too quickly. Reaching processes from another container (even in the same pod) is not yet possible in Kubernetes (it's actually a Docker limitation). So, short of shipping the Datadog agent in the same container as gunicorn, I don't think we support running the gunicorn check against a containerized gunicorn instance.
@hkaj thanks for replying and confirming my suspicions.
I guess one possible solution is to keep the cluster agents and add an additional dd-agent that runs inside the Docker image of my gunicorn backend. I guess this would mean an additional $15/month because it would count as an extra host (right?), but aside from that, are there any downsides?
Did you ever come to a solution on this?
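In case anyone lands on this later: newer Kubernetes versions added shareProcessNamespace, which gives all containers in a pod a shared PID namespace, so a Datadog agent sidecar in the same pod can see the gunicorn master without being baked into the app image. A rough sketch, where the app image name and the API-key Secret are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: rob
spec:
  shareProcessNamespace: true        # all containers in this pod share one PID namespace
  containers:
    - name: rob-backend
      image: rob-backend:latest      # placeholder for the image running gunicorn
    - name: dd-agent
      image: datadog/agent:latest
      env:
        - name: DD_API_KEY
          valueFrom:
            secretKeyRef:
              name: datadog-secret   # placeholder Secret holding the API key
              key: api-key
      # with the shared PID namespace, the agent's psutil can see
      # "gunicorn: master [rob_backend]", so the proc_name match in the
      # gunicorn check works as it would on a plain host

The gunicorn check config still has to be provided to that agent container, e.g. by mounting a conf.d/gunicorn.yaml from a ConfigMap.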