
This model's maximum context length is 8192 tokens.

torvicvasil opened this issue 1 year ago • 2 comments

I'm searching for a theme using an agent, and the search engine returns a lot of results. How do I limit the number of tokens? I'm getting this error:

BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens. However, your messages resulted in 8217 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

I'm using GPT-3.5 via the OpenAI API. Is there a way to work around this problem or to increase the token limit?

torvicvasil avatar Mar 06 '24 15:03 torvicvasil
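One common workaround is to truncate the search-engine output before it ever reaches the model. A minimal sketch, using a rough 4-characters-per-token heuristic (the `truncate_to_token_budget` helper is hypothetical, not part of crewAI; for exact counts you would use a tokenizer such as the tiktoken library instead of the heuristic):

```python
def truncate_to_token_budget(text: str, max_tokens: int, chars_per_token: int = 4) -> str:
    """Trim text to roughly max_tokens, using a chars-per-token heuristic.

    This is an approximation; a real tokenizer (e.g. tiktoken) gives exact counts.
    """
    budget = max_tokens * chars_per_token
    if len(text) <= budget:
        return text
    return text[:budget]

# Imagine a very long tool/search output that would blow past the context window:
search_results = "result " * 5000
trimmed = truncate_to_token_budget(search_results, max_tokens=3000)
```

Applied inside a custom tool, this keeps any single tool result from consuming the whole 8192-token window on its own.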

Try this?

from crewai import Agent
from langchain_openai import ChatOpenAI

def configure_language_model(model_name):
    """Configure the language model, capping the completion length."""
    # Note: max_tokens limits the *output* tokens, not the total context window.
    return ChatOpenAI(temperature=0.7, model_name=model_name, max_tokens=8000)

researcher = Agent(
    llm=configure_language_model("gpt-3.5-turbo-0125")
)

Adamchanadam avatar Mar 09 '24 07:03 Adamchanadam

So it's possible to pass langchain_openai models into the llm parameter? I thought only langchain and langchain-community could be used (on Python 3.8 it complains when I install langchain and tells me to install langchain-community instead, because of deprecation). Is it also possible to limit the max tokens for local LLMs? What kind of interface is the llm param — does it accept any LLM client from any API or vendor? 😅 Kinda cool

aliensouls avatar Mar 22 '24 13:03 aliensouls
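For local models the same idea seems to apply: most LangChain chat-model wrappers expose a knob for the completion length, and that wrapped object can be handed to the agent's llm parameter. A hedged configuration sketch, assuming a local Ollama server and the langchain-community package (num_ctx and num_predict are that wrapper's parameter names, not crewAI's):

```python
from langchain_community.chat_models import ChatOllama

# num_ctx sets the model's context window; num_predict caps output tokens.
local_llm = ChatOllama(
    model="llama2",
    num_ctx=4096,
    num_predict=512,
)

# Then pass it the same way as ChatOpenAI: Agent(llm=local_llm, ...)
```

This is only a sketch of the pattern; parameter names vary between local backends, so check the wrapper's documentation for your model server.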

That won't work in all cases, because the context window is shared between input and output tokens. Setting max_tokens=8000 on an 8192-token model leaves almost no room for the prompt, which again leads to a context length error.

saifpashaTrigent avatar Jun 20 '24 16:06 saifpashaTrigent
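The point above can be made concrete with some arithmetic: the prompt budget is whatever remains of the context window after reserving the completion. A small sketch (the numbers and the `input_budget` helper are illustrative, not from crewAI):

```python
CONTEXT_LIMIT = 8192       # gpt-3.5-turbo's total context window
MAX_OUTPUT_TOKENS = 1024   # tokens reserved for the completion

def input_budget(context_limit: int, max_output: int, safety_margin: int = 64) -> int:
    """Tokens left for the prompt once the completion and a margin are reserved."""
    return context_limit - max_output - safety_margin

budget = input_budget(CONTEXT_LIMIT, MAX_OUTPUT_TOKENS)  # plenty of room for the prompt

# With max_tokens=8000, as in the earlier snippet, the prompt budget collapses:
tiny = input_budget(CONTEXT_LIMIT, 8000)
```

With max_tokens=8000 only about a hundred tokens remain for the entire prompt, so any realistic agent message overflows the window.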

Hello, I'm having the same issue. Do you have a fix?

Thanks a lot!

JoseMiracydeSouzaFilho avatar Jul 09 '24 16:07 JoseMiracydeSouzaFilho

Hey John, I was able to solve it using the RAG method, but it started to hallucinate.


saifpashaTrigent avatar Jul 16 '24 18:07 saifpashaTrigent
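The RAG approach mentioned above boils down to retrieving only the most relevant chunks of the search results instead of stuffing everything into the prompt. A toy sketch of the idea using plain word overlap (real setups use embeddings and a vector store; `top_k_chunks` is a hypothetical helper):

```python
def top_k_chunks(chunks: list[str], query: str, k: int = 3) -> list[str]:
    """Rank text chunks by word overlap with the query and keep the best k."""
    query_words = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(query_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

chunks = [
    "crewAI agents and tasks",
    "token limits in GPT-3.5",
    "unrelated cooking recipe",
]
context = top_k_chunks(chunks, "how to handle token limits", k=2)
```

Only the retrieved chunks go into the agent's prompt, which keeps it under the context limit; the hallucination risk noted above comes from the model answering with too little (or the wrong) retrieved context.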

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Aug 20 '24 10:08 github-actions[bot]

This issue was closed because it has been stalled for 5 days with no activity.

github-actions[bot] avatar Aug 25 '24 12:08 github-actions[bot]