feat: Consider increasing otel attribute limits

Open Mortalife opened this issue 9 months ago • 3 comments

Is your feature request related to a problem? Please describe.

While using the Vercel AI SDK, I noticed that the current limits on span/trace attribute length aren't sufficient to carry meaningful trace information from the AI SDK. This results in truncated logs.

I was personally using the LangfuseExporter, and while the exporter was added successfully, it suffered from the same limitations the trigger platform places on traces.

It would be great if a solution could be found that enables these SDKs to export the information in its entirety, so I get the detail I'm looking for when debugging/monitoring my LLM applications.

Describe the solution you'd like to see

Considering these attributes will contain an entire message history, you're looking at limits vastly exceeding what you have set currently. The current limit doesn't even cover the system prompt. With that in mind, I can't really give you much of an idea of what the limits should look like other than "close to or exceeding model context windows".
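
For reference, these are the knobs the OpenTelemetry JS SDK itself exposes for this. This is only a generic sketch of where such limits live, not trigger.dev's internal configuration, and the numbers are illustrative rather than recommendations; the same limits can also be set via environment variables such as OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT.

    import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";

    // Sketch only: where attribute limits live in a plain OpenTelemetry JS setup.
    // trigger.dev wires up its own provider internally, so this shows which knobs
    // exist rather than where to change them.
    const provider = new NodeTracerProvider({
      spanLimits: {
        attributeCountLimit: 256, // illustrative values, not recommendations
        attributeValueLengthLimit: 131_072,
      },
    });
    provider.register();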

Describe alternate solutions

A way to provide the raw spans to exporters without truncation.
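
For context on why this likely needs SDK support rather than just a different exporter: in OpenTelemetry JS, attribute limits are applied when attributes are recorded on the span, so by the time an exporter sees a span the values are already truncated. A minimal sketch of the exporter interface involved (the class name is hypothetical), just to show where the data arrives:

    import type { ReadableSpan, SpanExporter } from "@opentelemetry/sdk-trace-base";
    import { ExportResultCode, type ExportResult } from "@opentelemetry/core";

    // Sketch: by the time export() is called, the SDK has already applied its
    // attribute limits, so "raw" values would need the limits relaxed upstream
    // rather than a change in the exporter itself.
    class PassthroughExporter implements SpanExporter {
      export(spans: ReadableSpan[], resultCallback: (result: ExportResult) => void): void {
        // forward the (already limit-processed) spans to a backend such as Langfuse
        resultCallback({ code: ExportResultCode.SUCCESS });
      }

      async shutdown(): Promise<void> {}
    }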

Additional information

No response

Mortalife avatar Apr 22 '25 10:04 Mortalife

We could increase the client-side limits while truncating attribute value lengths on the server to prevent excessive storage usage as a quick fix.
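
Purely as a sketch of what that server-side truncation could look like; the helper name and limit below are hypothetical, not trigger.dev code:

    // Hypothetical helper: clamp string attribute values before persisting spans,
    // so clients can send full payloads while stored size stays bounded.
    const MAX_STORED_ATTRIBUTE_LENGTH = 32_768; // illustrative limit

    function truncateAttributeValues(
      attributes: Record<string, unknown>,
      maxLength: number = MAX_STORED_ATTRIBUTE_LENGTH
    ): Record<string, unknown> {
      const truncated: Record<string, unknown> = {};
      for (const [key, value] of Object.entries(attributes)) {
        truncated[key] =
          typeof value === "string" && value.length > maxLength
            ? `${value.slice(0, maxLength)}…[truncated]`
            : value;
      }
      return truncated;
    }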

ericallam avatar Apr 24 '25 17:04 ericallam

Following—also running into the same problem

robechun avatar Apr 25 '25 04:04 robechun

> We could increase the client-side limits while truncating attribute value lengths on the server to prevent excessive storage usage as a quick fix.

I've moved back to doing my own tracing with the Langfuse client in the interim, so this isn't blocking for me. But it would be good to get the full tracing available at some point.

Mortalife avatar Apr 25 '25 07:04 Mortalife

Bump; would rather not have to implement custom tracing with the langfuse client; is there a workaround I can use with trigger?

robechun avatar Apr 28 '25 16:04 robechun

Just to provide an overview of how I do the manual implementation currently.
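
The snippets below assume a Langfuse client has already been constructed, roughly like this; the keys and host are placeholders:

    import { Langfuse } from "langfuse";

    // Client used throughout the snippets below; credentials are placeholders.
    const langfuse = new Langfuse({
      publicKey: process.env.LANGFUSE_PUBLIC_KEY,
      secretKey: process.env.LANGFUSE_SECRET_KEY,
      baseUrl: process.env.LANGFUSE_BASEURL, // optional, for self-hosted instances
    });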

At the top of my trigger function I create a trace:

    const trace = langfuse.trace({
      name: "my-trigger-task",
      sessionId: conversation_id, // Passed into my task trigger
      userId: user_id, // Passed into my task trigger
      input: message,
    });

I use Langfuse for managing my prompts:

    const prompt = await langfuse.getPrompt("my-trigger-task", undefined, {
      type: "text",
    });

I then start a generation:

    const generation = trace.generation({
      name: "chat-completion",
      model,
      prompt,
      input: message,
    });

The generation is used to emit events. I also pass it to my tool generator functions to allow access to the trace context when tools are called:

    import { tool } from "ai";
    import { z } from "zod";
    import type { LangfuseSpanClient } from "langfuse";

    export const getMyTool = (parentSpan?: LangfuseSpanClient) =>
      tool({
        description: "Addition tool",
        parameters: z.object({
          value_one: z.number().describe("The first value"),
          value_two: z.number().describe("The second value"),
        }),
        execute: async ({ value_one, value_two }) => {
          // Open a child span on the parent observation, if one was passed in
          const span = parentSpan?.span({
            name: "tool.getMyTool",
            input: {
              value_one,
              value_two,
            },
          });

          const output = value_one + value_two;

          span?.end({
            output,
          });

          return output;
        },
      });

The tool and the generation are then wired into the AI SDK call, with step events forwarded back to Langfuse:

    const { text, steps, toolCalls, usage } = await generateText({
      model,
      maxSteps: 10,
      system: prompt.compile({
        myVariable: "test",
      }),
      messages: storedMessages, // prior conversation history, defined elsewhere in my task
      toolChoice: "required",
      tools: {
        myTool: getMyTool(generation),
      },
      async onStepFinish(step) {
        // Pull out the large/noisy fields so only the remainder ends up in metadata
        const {
          request,
          response,
          reasoning,
          reasoningDetails,
          toolCalls: calls,
          toolResults: results,
          ...stepRest
        } = step;

        // This is not necessarily required
        generation.event({
          name: "finished-step",
          input: {
            finishReason: step.finishReason,
            calls: calls,
          },
          output: {
            results: results.map((result) => ({
              name: result.toolName,
            })),
            text: step.text,
          },
          metadata: {
            ...stepRest,
          },
        });
      },
    });

And at the end of my generation I pass in the end result:

    generation.end({
      output: storedMessages[storedMessages.length - 1],
      usage: {
        input: usage.promptTokens,
        output: usage.completionTokens,
        total: usage.totalTokens,
      },
    });

Finally, make sure you flush and shut down the Langfuse client:

    await langfuse.flushAsync();
    await langfuse.shutdownAsync();

Mortalife avatar May 09 '25 12:05 Mortalife

This has been fixed in 4.0.0-v4-beta.22

ericallam avatar Jul 09 '25 11:07 ericallam