APM socket not created
Output of the info page (if this is a bug)
We installed the Operator Helm chart, created the secret, and deployed the agent with the following `datadog-agent.yaml`:
```yaml
apiVersion: datadoghq.com/v1alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  credentials:
    apiSecret:
      secretName: datadog-secret
      keyName: api-key
    appSecret:
      secretName: datadog-secret
      keyName: app-key
  site: datadoghq.eu
  agent:
    image:
      name: "gcr.io/datadoghq/agent:latest"
    apm:
      enabled: true
    log:
      enabled: true
    process:
      enabled: true
    config:
      collectEvents: true
  clusterAgent:
    image:
      name: "gcr.io/datadoghq/cluster-agent:latest"
    config:
      externalMetrics:
        enabled: true
      admissionController:
        enabled: true
        mutateUnlabelled: true
```
Following the docs, we would expect `/var/run/datadog/apm.socket` to be created.
Screenshot of the docs:

Describe what happened:
Only `/var/run/datadog/statsd/statsd.sock` is created and mounted.
Describe what you expected:
- The socket `/var/run/datadog/apm.socket` being created and mounted.
- The socket `/var/run/datadog/statsd/statsd.sock` being created on the default path `/var/run/datadog/dsd.socket`.
- The configuration variables `agent.apm.hostSocketPath` and `agent.apm.socketPath` being included in the list at https://docs.datadoghq.com/containers/kubernetes/operator_configuration/.
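For context, this is a minimal sketch of how an application pod would consume the expected APM socket once it exists. The image name and pod name are hypothetical; the `hostPath` mount and the standard `DD_TRACE_AGENT_URL` tracer environment variable follow the documented Unix-domain-socket setup:

```yaml
# Sketch only: assumes the agent exposes the APM socket at the
# documented host path /var/run/datadog/apm.socket.
apiVersion: v1
kind: Pod
metadata:
  name: my-app        # hypothetical
spec:
  containers:
    - name: my-app
      image: my-app:latest   # hypothetical application image
      env:
        # Point the tracer at the Unix domain socket instead of TCP port 8126.
        - name: DD_TRACE_AGENT_URL
          value: "unix:///var/run/datadog/apm.socket"
      volumeMounts:
        - name: apmsocketpath
          mountPath: /var/run/datadog
  volumes:
    - name: apmsocketpath
      hostPath:
        path: /var/run/datadog/
```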
Steps to reproduce the issue:
Additional environment details (Operating System, Cloud provider, etc.):
- Kubernetes: v1.22.11
- Cloud provider: AWS
- Datadog-operator chart: 0.8.6
I also tried adding the undocumented setting `spec.agent.apm.unixDomainSocket.enabled: true`, based on this reported issue: https://github.com/DataDog/datadog-operator/issues/585, without success.
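For reference, that dotted path rendered as YAML inside the manifest above would look like this (a sketch only; the field is undocumented and did not work in my tests):

```yaml
spec:
  agent:
    apm:
      enabled: true
      # Undocumented field tried per datadog-operator issue #585:
      unixDomainSocket:
        enabled: true
```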
Can this issue get some traction? We've hit the exact same problem, and APM does not work without it. Given that Datadog now recommends the datadog-operator as the preferred approach, this is poor: the alternative is to skip the Unix socket and open port 8126 as a host port on the nodes, which is insecure.
Hello, sorry this issue didn't get any attention so far!
The original issue refers to the Operator 0.8.6 beta version. We made Operator 1.0 generally available in April, and at this point we are not prioritizing bugs in earlier versions. We recommend migrating to 1.0; please check the migration guide here.
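As a starting point for the migration, a minimal `v2alpha1` sketch of the manifest above might look like the following. Field names are taken from the v2alpha1 `DatadogAgent` CRD as I understand it, and the socket path shown is an assumption; verify both against the migration guide:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    site: datadoghq.eu
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key
      appSecret:
        secretName: datadog-secret
        keyName: app-key
  features:
    apm:
      enabled: true
      # In v2alpha1, APM socket settings move under the feature:
      unixDomainSocketConfig:
        enabled: true
        path: /var/run/datadog/apm/apm.socket  # assumed default; confirm in docs
```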
@AdamS3x could you please provide details about your test environment and share the manifest you are using?