Bug Report: Trace-ID Injection Missing Between LangChain and vLLM Spans
Which component is this bug for?
Langchain Instrumentation
Description
I am using the OpenLLMetry LangchainInstrumentor to instrument VLLMOpenAI in LangChain. Both vLLM and the LangchainInstrumentor are configured to export traces to the same trace collector.
However, the spans from vLLM and LangChain appear in separate traces. It seems that LangchainInstrumentor is not injecting its trace-id into the requests sent to vLLM. Is there a way to enable this?
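Concretely, by "injecting its trace-id" I mean propagating the current trace context (e.g. a W3C traceparent header) on the HTTP requests that LangChain sends to vLLM. A minimal sketch of what doing this manually might look like (untested; it assumes that VLLMOpenAI forwards default_headers to its underlying HTTP client and that vLLM extracts traceparent from incoming requests — neither assumption is verified):

from opentelemetry import trace
from opentelemetry.propagate import inject
from langchain_community.llms import VLLMOpenAI

tracer = trace.get_tracer(__name__)  # relies on the tracer provider set up in the repro below

with tracer.start_as_current_span("langchain-request"):
    headers = {}
    inject(headers)  # fills in e.g. {"traceparent": "00-<trace-id>-<span-id>-01"}
    llm = VLLMOpenAI(
        openai_api_key="EMPTY",
        openai_api_base="http://localhost:8000/v1",
        model_name="facebook/opt-125m",
        default_headers=headers,  # assumed to be passed through to vLLM (not verified)
    )
    print(llm.invoke("Rome is"))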
Reproduction steps
- Run Jaeger as the trace collector
docker run --rm --name jaeger \
-e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 4317:4317 \
-p 4318:4318 \
-p 14250:14250 \
-p 14268:14268 \
-p 14269:14269 \
-p 9411:9411 \
jaegertracing/all-in-one:1.57
- Run vLLM
export OTEL_SERVICE_NAME="vllm-server"
export OTEL_EXPORTER_OTLP_TRACES_INSECURE=true
vllm serve facebook/opt-125m --otlp-traces-endpoint "grpc://localhost:4317"
- Run LangChain with LangchainInstrumentor:
from opentelemetry.instrumentation.langchain import LangchainInstrumentor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.trace import set_tracer_provider
from langchain_community.llms import VLLMOpenAI
trace_provider = TracerProvider(resource=Resource(attributes={"service.name": "LangChain"}))
trace_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint="grpc://localhost:4317", insecure=True)))
trace_provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
set_tracer_provider(trace_provider)
LangchainInstrumentor().instrument()
llm = VLLMOpenAI(
openai_api_key="EMPTY",
openai_api_base="http://localhost:8000/v1",
model_name="facebook/opt-125m",
)
print(llm.invoke("Rome is"))
Expected behavior
I should see one trace with 2 spans (one from LangChain, one from vLLM) in the Jaeger Web UI.
Actual Behavior with Screenshots
I see 2 traces, each with a single span, in the Jaeger Web UI at http://localhost:16686/.
Python Version
No response
Provide any additional context for the Bug.
It seems related to https://github.com/traceloop/openllmetry/discussions/547, but I couldn't find a solution there.
Have you spent some time to check if this bug has been raised before?
- [X] I checked and didn't find a similar issue
Are you willing to submit a PR?
None
Hey there, @ronensc! I'm here to help with any bugs, questions, or contributions you have in mind. Let's tackle this challenge together!
You can keep the spans from LangChain in a single trace by setting the context of the parent span when creating a new span for the request; note that for the vLLM server's own spans to join that same trace, the trace context also has to be propagated to vLLM in the request headers.
Here is an example of how you can modify the _create_llm_span method to include the trace context:
def _create_llm_span(
    self,
    run_id: UUID,
    parent_run_id: Optional[UUID],
    name: str,
    request_type: LLMRequestTypeValues,
    metadata: Optional[dict[str, Any]] = None,
) -> Span:
    workflow_name = self.get_workflow_name(parent_run_id)
    entity_path = self.get_entity_path(parent_run_id)

    # If the parent run's span is being tracked, use it as the parent context
    # so the new span joins the same trace instead of starting a new one.
    if parent_run_id is not None and parent_run_id in self.spans:
        parent_span_context = set_span_in_context(self.spans[parent_run_id].span)
    else:
        parent_span_context = None

    # Note: Tracer.start_span() accepts no `metadata` keyword; if metadata
    # needs to be recorded, it has to be folded into the span attributes.
    span = self.tracer.start_span(
        f"{name}.{request_type.value}",
        context=parent_span_context,
        kind=SpanKind.CLIENT,
        attributes={
            SpanAttributes.LLM_SYSTEM: "Langchain",
            SpanAttributes.LLM_REQUEST_TYPE: request_type.value,
            SpanAttributes.TRACELOOP_WORKFLOW_NAME: workflow_name,
            SpanAttributes.TRACELOOP_ENTITY_PATH: entity_path,
        },
    )
    return span
In this code, the context parameter is set to the context of the parent span if it exists. This ensures that the new span is part of the same trace as the parent span [1].
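For illustration, here is a minimal self-contained sketch (not OpenLLMetry code; the span names are made up) showing that a span created with the parent's context shares its trace_id, while a span created without one starts a fresh trace:

from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.trace import set_span_in_context

provider = TracerProvider()
tracer = provider.get_tracer("demo")

parent = tracer.start_span("workflow")  # root span with a new trace_id
ctx = set_span_in_context(parent)       # context object carrying the parent span

child = tracer.start_span("llm.completion", context=ctx)  # joins the parent's trace
orphan = tracer.start_span("llm.completion")              # starts its own trace

assert child.get_span_context().trace_id == parent.get_span_context().trace_id
assert orphan.get_span_context().trace_id != parent.get_span_context().trace_id

child.end()
orphan.end()
parent.end()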