
Examples/clarification on using OpenTelemetry tracing in MLServer

Open danielsoutar opened this issue 2 years ago • 0 comments

Hi there - the docs show very little information on how to incorporate OpenTelemetry tracing in MLServer, which was added in this PR.

If, for instance, I'm deploying this model server within K8s and Docker, what would the setup look like for the default collector? I have a FastAPI app that pings the model server, and I'd like tracing to work across both services so I can analyse a request end-to-end. In MLServer's settings.json I have this:

{
    "debug": true,
    "tracing_server": "0.0.0.0:8000"
}
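
(My guess from the setting's name is that tracing_server should point at an OTLP collector endpoint rather than at my own app. If that's right, and the collector were running as a sidecar or service named otel-collector on the default OTLP gRPC port, I'd expect something more like the following - the service name and port here are my assumptions, not something I found in the docs:)

{
    "debug": true,
    "tracing_server": "otel-collector:4317"
}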

And in the Dockerfile of the FastAPI app:

…
# Variables taken from
# https://opentelemetry.io/docs/instrumentation/python/automatic/agent-config/
ENV OTEL_TRACES_EXPORTER=none
ENV OTEL_METRICS_EXPORTER=none
ENV OTEL_SERVICE_NAME=myapp
ENV OTEL_PROPAGATORS="b3,tracecontext,baggage"
ENV OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
ENV OTEL_PYTHON_EXCLUDED_URLS="healthz/.*,docs,docs/.*,.*/openapi.json"
…
CMD ["poetry", "run", "opentelemetry-instrument", "uvicorn", "--host", "0.0.0.0", "--log-level", "info", "src.main:app"]
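
(A related thing I couldn't find documented: whether trace/span IDs ever show up in log lines without explicitly enabling log correlation. Based on the opentelemetry-instrumentation-logging package, my guess is I'd also need something like the following in the Dockerfile - again an assumption on my part:)

# Assumed: inject trace/span IDs into log records via
# opentelemetry-instrumentation-logging (not taken from the MLServer docs)
ENV OTEL_PYTHON_LOG_CORRELATION=true
ENV OTEL_PYTHON_LOG_FORMAT="%(msg)s [trace_id=%(otelTraceID)s span_id=%(otelSpanID)s]"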

Here I'd expect the default (local) collector to be listening on port 8000 (uvicorn's default port) on the same host. This doesn't work, though: the logs don't include any trace or span IDs.
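
(For context, my rough mental model of how the services would fit together in Docker is sketched below. All of the service names, images, and wiring are my own assumptions, since the docs don't show an end-to-end example:)

# docker-compose.yml (sketch of my assumed topology, not a working setup)
services:
  otel-collector:
    image: otel/opentelemetry-collector:latest
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP

  mlserver:
    image: seldonio/mlserver:latest
    # settings.json mounted in, with "tracing_server" pointing at otel-collector:4317?
    depends_on:
      - otel-collector

  myapp:
    build: .
    environment:
      OTEL_SERVICE_NAME: myapp
      OTEL_TRACES_EXPORTER: otlp  # or should this stay as none?
      OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4318
      OTEL_EXPORTER_OTLP_PROTOCOL: http/protobuf
    depends_on:
      - otel-collector
      - mlserver

Is this roughly the intended topology, or does MLServer expect the collector to live somewhere else?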

Any clarification or examples greatly appreciated!

danielsoutar · Dec 15 '23 00:12