
[BUG]

gergirod opened this issue 1 year ago · 8 comments

Description

The crew result token usage is always 0:

```
total_tokens=0 prompt_tokens=0 cached_prompt_tokens=0 completion_tokens=0 successful_requests=0
```

Steps to Reproduce

This is how I set up the crew.

This is how I create the agent:

```python
def _create_researcher(self):
    return Agent(
        role='Data Researcher',
        goal='Gather comprehensive information about the target profile',
        backstory="""You are an expert in data gathering, specializing in finding comprehensive information online.
                    You know how to craft precise search queries to find specific professional profiles and validate connections.""",
        verbose=True,
        tools=[getSerperTool()],
        allow_delegation=False,
        llm=self.agents_llm
    )
```

This is how I create the crew:

```python
def _create_crew(self):
    agents = [self._create_researcher(), self._create_info_gatherer()]
    tasks = [self._create_research_task(), self._create_gather_info_task()]

    return Crew(
        agents=agents,
        tasks=tasks,
        verbose=True
    )
```

This is how I create the LLM:

```python
def create_llm(api_key: str):
    return LLM(model=os.getenv("OPENAI_MODEL_NAME"), api_key=api_key, temperature=0.0)
```

And this is how I kick it off:

```python
self.set_llm(openai_api_key)

self.crew = self._create_crew()

inputs = {
    'profile': fullname,
    'company': company,
    'language': language
}

result = self.crew.kickoff(inputs=inputs)
```

### Expected behavior

I expect the token usage to reflect the tokens actually consumed by the crew.

### Screenshots/Code snippets

Same code as in **Steps to Reproduce** above.

Operating System

macOS Monterey

Python Version

3.10

crewAI Version

0.80.0

crewAI Tools Version

0.14.0

Virtual Environment

Venv


Possible Solution

None

Additional context

OPENAI_MODEL_NAME=gpt-4o-mini

gergirod avatar Nov 14 '24 17:11 gergirod

Any updates on this, @joaomdmoura? Could this be related to me not using the decorators?

gergirod avatar Nov 23 '24 19:11 gergirod

Hi @gergirod, I am trying to reproduce your issue here, using crewAI 0.80 and Python 3.10. I came up with this crew, similar to yours:

```python
#!/usr/bin/env python
import os
import warnings

from crewai import LLM, Agent, Crew, Task
from crewai_tools import SerperDevTool

warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")

agents_llm = LLM(
    model=os.getenv("OPENAI_MODEL_NAME", "gpt-4o-mini"),
    api_key=os.getenv("OPENAI_API_KEY"),
    temperature=0.0,
)

def _create_researcher():
    return Agent(
        role="Data Researcher",
        goal="Gather comprehensive information about the target profile",
        backstory="""You are an expert in data gathering, specializing in finding comprehensive information online.
                    You know how to craft precise search queries to find specific professional profiles and validate connections.""",
        verbose=True,
        tools=[SerperDevTool()],
        allow_delegation=False,
        llm=agents_llm,
    )

def _create_reporting_analyst():
    return Agent(
        role="Reporting Analyst",
        goal="Generate a report based on the research",
        backstory="""You are an expert in data analysis, specializing in generating comprehensive reports based on the gathered information.
                    You know how to structure and present data in a clear and concise manner.""",
        llm=agents_llm,
    )

def _create_research_task(agent):
    return Task(
        description="Conduct a thorough research about {topic}",
        expected_output="A list with 10 bullet points of the most relevant information about {topic}",
        agent=agent,
    )

def _create_reporting_task(agent):
    return Task(
        description="Generate a report based on the research",
        expected_output="A detailed report in markdown format",
        agent=agent,
    )

def _create_crew():
    researcher = _create_researcher()
    reporting_analyst = _create_reporting_analyst()
    tasks = [_create_research_task(researcher), _create_reporting_task(reporting_analyst)]
    return Crew(agents=[researcher, reporting_analyst], tasks=tasks, verbose=True)

def run():
    inputs = {"topic": "AI LLMs"}
    crew = _create_crew()
    crew.kickoff(inputs=inputs)
    print("Usage metrics:")
    print(crew.usage_metrics)

if __name__ == "__main__":
    run()
```

However, when run with `uv run crewai run`, I get usage metrics as usual, e.g.:

```
Usage metrics:
total_tokens=12548 prompt_tokens=10657 cached_prompt_tokens=2816 completion_tokens=1891 successful_requests=6
```

Would you mind giving more details, or spotting something you are doing on your side that differs from this example? That would help us try to reproduce the issue.

Is your crew running without any issues, outputting the results as expected without any bumps?

thiagomoretto avatar Nov 27 '24 21:11 thiagomoretto

Hello @thiagomoretto, thanks for replying. I found one difference: I was trying to get the token usage from the crew output, while here I can see that you get it from the crew itself.

Even though I noticed that difference, I tried it the way you are doing it and I'm still getting all zeros:

```
total_tokens=0 prompt_tokens=0 cached_prompt_tokens=0 completion_tokens=0 successful_requests=0
```

My other main difference is that my crew is wrapped in a class:

```python
class ProfileInsightCrew:
```

Could it be that I'm not using the decorators?

The output is fine, I'm getting the results I want, just not the token usage.

thanks

gergirod avatar Nov 27 '24 22:11 gergirod

> Could it be that I'm not using the decorators?

Hey @gergirod, I don't think so. In my example, I am not using decorators either.
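For what it's worth, a crew wrapped in a plain class without decorators should still report usage metrics, as long as they are read after kickoff. Here is a minimal sketch; the agent and task contents are illustrative placeholders:

```python
import os
from crewai import LLM, Agent, Crew, Task

# Minimal sketch of a crew wrapped in a plain class, no decorators.
class ProfileInsightCrew:
    def __init__(self):
        llm = LLM(model="gpt-4o-mini", api_key=os.getenv("OPENAI_API_KEY"))
        agent = Agent(
            role="Data Researcher",
            goal="Gather information about {profile}",
            backstory="You are an expert researcher.",
            llm=llm,
        )
        task = Task(
            description="Research {profile} at {company}",
            expected_output="A short profile summary",
            agent=agent,
        )
        self.crew = Crew(agents=[agent], tasks=[task])

    def run(self, inputs):
        result = self.crew.kickoff(inputs=inputs)
        # Read the metrics only after kickoff has returned.
        return result, self.crew.usage_metrics

result, metrics = ProfileInsightCrew().run({"profile": "Jane Doe", "company": "Acme"})
print(metrics)
```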

I hypothesize that you are getting the UsageMetrics reference before it gets computed.

This is the step where the usage metrics are computed after kickoff: https://github.com/crewAIInc/crewAI/blob/main/src/crewai/crew.py#L566-L571

But there's a caveat: it recreates the object, so if you hold an old reference, you might end up with zeroed usage metrics.
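To see why an early reference matters, here is a self-contained toy demonstration of that replacement behavior (plain Python, not the actual crewAI source):

```python
from dataclasses import dataclass

# Toy model of the caveat: if kickoff assigns a brand-new metrics object,
# a reference captured before kickoff keeps pointing at the zeroed one.

@dataclass
class ToyUsageMetrics:
    total_tokens: int = 0

class ToyCrew:
    def __init__(self):
        self.usage_metrics = ToyUsageMetrics()

    def kickoff(self):
        # Mirrors the caveat above: the object is replaced, not updated.
        self.usage_metrics = ToyUsageMetrics(total_tokens=1234)

crew = ToyCrew()
early_ref = crew.usage_metrics           # captured before kickoff
crew.kickoff()
print(early_ref.total_tokens)            # 0    (stale reference)
print(crew.usage_metrics.total_tokens)   # 1234 (read after kickoff)
```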

So, could you double-check if you are getting crew.usage_metrics after kickoff?

What I mean is, if you are doing something like this:

```python
usage_metrics = crew.usage_metrics
crew.kickoff()
print(usage_metrics)
```

The metrics will be zeroed since the reference is replaced.

Whereas:

```python
crew.kickoff()
usage_metrics = crew.usage_metrics
print(usage_metrics)
```

It should work.

If this is the case, the above can be a solution for you. Still, I believe there is room for a small improvement in the lib: instead of replacing the object entirely, just reset the metrics in place to ensure the values are zeroed before summing them all, though I am not sure whether that has other side effects.
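A sketch of that in-place reset idea, again with toy classes rather than the real crewAI internals:

```python
from dataclasses import dataclass

@dataclass
class ToyUsageMetrics:
    total_tokens: int = 0

    def reset(self):
        self.total_tokens = 0

class ToyCrew:
    def __init__(self):
        self.usage_metrics = ToyUsageMetrics()

    def kickoff(self):
        # Zero and accumulate in place instead of assigning a new object,
        # so references captured before kickoff still see the final totals.
        self.usage_metrics.reset()
        self.usage_metrics.total_tokens += 1234

crew = ToyCrew()
early_ref = crew.usage_metrics   # captured before kickoff
crew.kickoff()
print(early_ref.total_tokens)    # 1234, the early reference stays valid
```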

thiagomoretto avatar Dec 04 '24 13:12 thiagomoretto

@thiagomoretto this is how I'm doing it:

```python
result = self.crew.kickoff(inputs=inputs)
gather_info_task_output = result.tasks_output[1].json_dict

response = {
    'profile': gather_info_task_output,
    'total_tokens': result.token_usage.total_tokens if result.token_usage else 0
}
```

So here you can see that I'm getting the token_usage from the crew output. The task outputs are OK. Are you suggesting that the crew actually hasn't finished at the point where I read token_usage?

gergirod avatar Dec 04 '24 14:12 gergirod

ok, curiously, if I do the same, I get the usage correctly. Using gpt-4o-mini here.
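One quick way to narrow it down, continuing your snippet above, is to print the usage from both places right after kickoff (attribute names as used earlier in this thread):

```python
result = self.crew.kickoff(inputs=inputs)
# Compare the usage reported on the CrewOutput with the one on the Crew
# object itself; both should be populated once kickoff has returned.
print("CrewOutput.token_usage:", result.token_usage)
print("Crew.usage_metrics:    ", self.crew.usage_metrics)
```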

Are you using gpt-4o-mini, as reported in the issue? The token usage is captured using LiteLLM callbacks, so I'm just checking whether there is something odd related to a specific model.
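If you want to rule out the provider side, you can also check that LiteLLM itself reports usage for this model directly. A minimal sketch, assuming OPENAI_API_KEY is set in your environment:

```python
import litellm

# Direct LiteLLM call, bypassing crewAI entirely; response.usage should
# contain non-zero prompt/completion token counts for gpt-4o-mini.
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hi."}],
)
print(response.usage)
```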

Have you tried the latest 0.83 version? Just in case.

thiagomoretto avatar Dec 04 '24 16:12 thiagomoretto

@thiagomoretto I'm using gpt-4o-mini and yes, I'm on version 0.83.0.

gergirod avatar Dec 04 '24 16:12 gergirod

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Jan 04 '25 12:01 github-actions[bot]

This issue was closed because it has been stalled for 5 days with no activity.

github-actions[bot] avatar Jan 09 '25 12:01 github-actions[bot]

@gergirod where did you end up with this issue? I am facing the same issue currently

mohsin2596 avatar Jul 10 '25 14:07 mohsin2596