
feat(anthropic): add instrumentation for Anthropic tool calling

Open peachypeachyy opened this issue 1 year ago • 4 comments

Fixes #879

  • [x] I have added tests that cover my changes.
  • [ ] If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
  • [x] PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....
  • [ ] (If applicable) I have updated the documentation accordingly.

peachypeachyy avatar Jun 21 '24 04:06 peachypeachyy

@nirga The tool_use finish reason in the response is already captured here:

When you say instrumentation of tool_use, do you mean setting the response attributes of the Chain of Thought, i.e., setting response attributes to something like this:

{
  "type": "tool_use",
  "id": "toolu_01A09q90qw90lq917835lq9",
  "name": "get_weather",
  "input": {"location": "San Francisco, CA", "unit": "celsius"}
}
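For illustration, one way to flatten a block like that into flat span-attribute keys, following the indexed gen_ai.completion.* convention used elsewhere in OpenLLMetry. This is only a sketch; the exact tool_calls key names are an assumption, not what the PR settled on:

```python
# Illustrative sketch only: the tool_calls key layout is an assumption,
# not the merged OpenLLMetry implementation.
import json

def tool_use_to_attributes(block, completion_index=0, tool_index=0):
    """Flatten an Anthropic tool_use content block into span-attribute keys."""
    prefix = f"gen_ai.completion.{completion_index}.tool_calls.{tool_index}"
    return {
        f"{prefix}.id": block["id"],
        f"{prefix}.name": block["name"],
        # Span attribute values must be primitives, so serialize the input dict.
        f"{prefix}.arguments": json.dumps(block["input"]),
    }

block = {
    "type": "tool_use",
    "id": "toolu_01A09q90qw90lq917835lq9",
    "name": "get_weather",
    "input": {"location": "San Francisco, CA", "unit": "celsius"},
}
attrs = tool_use_to_attributes(block)
```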

peachypeachyy avatar Jun 22 '24 02:06 peachypeachyy

@nirga if I have to set the response attributes as mentioned above, which spans do I use?

I ask because the only response attribute available to me in the opentelemetry-semconv-ai package is LLM_RESPONSE_MODEL.

I do see LLM_RESPONSE_FINISH_REASON and LLM_RESPONSE_ID commented out, as shown below, but I'm not sure if I can uncomment and use them, or create new ones such as LLM_RESPONSE_TOOLS, because that code lives in the opentelemetry.semconv.ai package and not in our openllmetry codebase. Correct me if I am mistaken here. 😄

class SpanAttributes:
    # Semantic Conventions for LLM requests, this needs to be removed after
    # OpenTelemetry Semantic Conventions support Gen AI.
    # Issue at https://github.com/open-telemetry/opentelemetry-python/issues/3868
    # Refer to https://github.com/open-telemetry/semantic-conventions/blob/main/docs/gen-ai/llm-spans.md
    # for more detail for LLM spans from OpenTelemetry Community.
    LLM_SYSTEM = "gen_ai.system"
    LLM_REQUEST_MODEL = "gen_ai.request.model"
    LLM_REQUEST_MAX_TOKENS = "gen_ai.request.max_tokens"
    LLM_REQUEST_TEMPERATURE = "gen_ai.request.temperature"
    LLM_REQUEST_TOP_P = "gen_ai.request.top_p"
    LLM_PROMPTS = "gen_ai.prompt"
    LLM_COMPLETIONS = "gen_ai.completion"
    LLM_RESPONSE_MODEL = "gen_ai.response.model"
    LLM_USAGE_COMPLETION_TOKENS = "gen_ai.usage.completion_tokens"
    LLM_USAGE_PROMPT_TOKENS = "gen_ai.usage.prompt_tokens"
    # To be added
    # LLM_RESPONSE_FINISH_REASON = "gen_ai.response.finish_reasons"
    # LLM_RESPONSE_ID = "gen_ai.response.id"

peachypeachyy avatar Jun 22 '24 06:06 peachypeachyy

@peachypeachyy this should be similar to OpenAI. You're right btw, we're missing these from the semantic conventions.

nirga avatar Jun 22 '24 08:06 nirga

@nirga done, I have captured spans for tool_use in the response as well. I used SpanAttributes.LLM_COMPLETIONS, since that is what the OpenAI instrumentation uses.
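A minimal sketch of that approach, assuming the indexed gen_ai.completion.* layout; RecordingSpan is a stand-in for a real OpenTelemetry span, and the key layout is illustrative rather than taken from the PR:

```python
# Sketch with a stand-in span object; in the real instrumentation this
# would be an OpenTelemetry span, and the keys shown are illustrative.
class RecordingSpan:
    def __init__(self):
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value

LLM_COMPLETIONS = "gen_ai.completion"  # i.e. SpanAttributes.LLM_COMPLETIONS

span = RecordingSpan()
choice = 0
span.set_attribute(f"{LLM_COMPLETIONS}.{choice}.role", "assistant")
span.set_attribute(f"{LLM_COMPLETIONS}.{choice}.finish_reason", "tool_use")
```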

peachypeachyy avatar Jun 23 '24 05:06 peachypeachyy

@nirga @peachypeachyy What's the status of the work here? If this is stale, we are happy to contribute from the newest main branch.

dinmukhamedm avatar Oct 16 '24 02:10 dinmukhamedm

@dinmukhamedm happy for a contribution! It's a small one, and I think @peachypeachyy is no longer working on it.

nirga avatar Oct 16 '24 07:10 nirga

Awesome, opened https://github.com/traceloop/openllmetry/pull/2150.

dinmukhamedm avatar Oct 16 '24 07:10 dinmukhamedm