
BUG: GithubSearchTool unable to pick up custom LLM

Open Spartan-71 opened this issue 11 months ago • 17 comments

By default, GithubSearchTool uses OpenAI's gpt-4o-mini model.

According to the documentation, the LLM can be switched to a different provider.

However, even after changing the LLM config, the tool still uses OpenAI's gpt-4o-mini.

Sample code from the documentation:

tool = GithubSearchTool(
    config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=True,
            ),
        ),
        embedder=dict(
            provider="google",
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    )
)
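For what it's worth, a quick plain-Python sanity check (no crewAI involved; `resolve_llm_config` is a helper of my own, and the fallback defaults are the ones the tool appears to use) confirms the nested config at least has the shape the docs describe:

```python
# Hypothetical helper: walk the nested config dict and report which
# provider/model *should* be picked up. Purely illustrative.
def resolve_llm_config(config: dict) -> tuple[str, str]:
    llm = config.get("llm", {})
    provider = llm.get("provider", "openai")                    # observed default provider
    model = llm.get("config", {}).get("model", "gpt-4o-mini")   # observed default model
    return provider, model

config = dict(
    llm=dict(provider="ollama", config=dict(model="llama2")),
    embedder=dict(provider="google", config=dict(model="models/embedding-001")),
)

print(resolve_llm_config(config))  # expect ('ollama', 'llama2'), not the default
```

So the config itself resolves as expected; the bug must be in how the tool consumes it.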

Spartan-71 avatar Mar 07 '25 16:03 Spartan-71

I couldn't reproduce your issue. Could you add more context, such as logs, errors, and package versions?

lucasgomide avatar Apr 27 '25 16:04 lucasgomide

Thanks for testing; it's odd that you couldn't reproduce it. Let me turn on logging, and the next time it happens I will post the output.

yqup avatar Apr 27 '25 17:04 yqup

I notice that crewai 0.117.0 adds support for OpenAI GPT-4.1. I suspect this error is related to GPT-4.1 being unsupported in my current version, so I will keep an eye out for the error message after upgrading to crewai 0.117.0.

yqup avatar Apr 28 '25 13:04 yqup

Damn, it happened again. Full error log:

LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.

Error during LLM call: litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: OpenAIException - Error code: 400 - {'error': {'message': "This model's maximum context length is 1047576 tokens. However, your messages resulted in 4731310 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 711, in completion raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 638, in completion self.make_sync_openai_chat_completion_request( File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 145, in sync_wrapper result = func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 457, in make_sync_openai_chat_completion_request raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 439, in make_sync_openai_chat_completion_request raw_response = openai_client.chat.completions.with_raw_response.create( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 279, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 879, in create return self._post( ^^^^^^^^^^^ File 
"/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1296, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 973, in request return self._request( ^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1077, in _request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 1047576 tokens. However, your messages resulted in 4731310 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/main.py", line 1692, in completion raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/main.py", line 1665, in completion response = openai_chat_completions.completion( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 721, in completion raise OpenAIError( litellm.llms.openai.common_utils.OpenAIError: Error code: 400 - {'error': {'message': "This model's maximum context length is 1047576 tokens. However, your messages resulted in 4731310 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "/crewai/dev/orchistra/deck/.venv/bin/kickoff", line 10, in sys.exit(kickoff()) ^^^^^^^^^ File "/crewai/dev/orchistra/deck/src/deck/main.py", line 47, in kickoff action_flow.kickoff() File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/flow/flow.py", line 756, in kickoff return asyncio.run(self.kickoff_async()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/asyncio/runners.py", line 195, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/flow/flow.py", line 770, in kickoff_async await asyncio.gather(*tasks) File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/flow/flow.py", line 802, in _execute_start_method result = await self._execute_method( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/flow/flow.py", line 825, in _execute_method else method(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/src/deck/main.py", line 43, in deck self.state.smt_report = Deck().crew().kickoff(inputs=self.state.inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/crew.py", line 578, in kickoff result = self._run_hierarchical_process() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/crew.py", line 688, in _run_hierarchical_process return self._execute_tasks(self.tasks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/crew.py", line 781, in _execute_tasks task_output = 
task.execute_sync( ^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/task.py", line 302, in execute_sync return self._execute_core(agent, context, tools) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/task.py", line 366, in _execute_core result = agent.execute_task( ^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agent.py", line 254, in execute_task raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agent.py", line 243, in execute_task result = self.agent_executor.invoke( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 112, in invoke raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 102, in invoke formatted_answer = self._invoke_loop() ^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 160, in _invoke_loop raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 140, in _invoke_loop answer = self._get_llm_response() ^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 210, in _get_llm_response raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 201, in _get_llm_response answer = self.llm.call( ^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/llm.py", line 291, in call response = litellm.completion(**params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1154, in wrapper raise e File 
"/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1032, in wrapper result = original_function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/main.py", line 3068, in completion raise exception_type( ^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2201, in exception_type raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 282, in exception_type raise ContextWindowExceededError( litellm.exceptions.ContextWindowExceededError: litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: OpenAIException - Error code: 400 - {'error': {'message': "This model's maximum context length is 1047576 tokens. However, your messages resulted in 4731310 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} An error occurred while running the flow: Command '['uv', 'run', 'kickoff']' returned non-zero exit status 1.


yqup avatar Apr 28 '25 17:04 yqup

From your last message, I understand that the error refers to the model's context window being exceeded, rather than "GithubSearchTool unable to pick up custom LLM" as initially reported.

Based on your logs, I infer that you are using GPT-4.1, which supports around 1M tokens; that is not enough for your use case, which involves around 4M tokens. This can happen if, for example, you are feeding in your entire codebase.

According to our internal mapping, the closest available model is Gemini 1.5 Pro, which supports up to 2,097,152 tokens.
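(Illustrative only: the lookup below is a toy, not our internal mapping; the window sizes are the 128K figure from public gpt-4o-mini specs plus the two numbers already quoted in this thread.)

```python
# Toy context-window lookup: list the models whose window can hold the
# required token count.
CONTEXT_WINDOWS = {
    "gpt-4o-mini": 128_000,
    "gpt-4.1": 1_047_576,
    "gemini-1.5-pro": 2_097_152,
}

def models_that_fit(required_tokens: int) -> list[str]:
    return [m for m, w in CONTEXT_WINDOWS.items() if w >= required_tokens]

print(models_that_fit(4_731_310))  # the ~4.7M-token request from the log fits no model
print(models_that_fit(2_000_000))  # only gemini-1.5-pro
```

So even the largest window here would not hold the ~4.7M tokens your log reports; the input itself has to shrink.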

lucasgomide avatar Apr 28 '25 18:04 lucasgomide

I apologise for not being clear about the error.

As I have no control over the agent or the websites it scrapes, I cannot avoid the error myself.

My point is that CrewAI should not fail outright because of this error; it should record it as an issue and move on to the next website.
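The skip-and-continue behaviour I have in mind could be sketched like this (a hypothetical wrapper of my own, not CrewAI API; the exception class and `analyze` stub stand in for litellm's ContextWindowExceededError and the real scrape-then-LLM step):

```python
class ContextWindowExceededError(Exception):
    """Stand-in for litellm.exceptions.ContextWindowExceededError."""

def analyze(url: str) -> str:
    # Placeholder for the real scrape-then-LLM step.
    if "huge-pdf" in url:
        raise ContextWindowExceededError(f"{url}: messages too long")
    return f"report for {url}"

def analyze_all(urls):
    results, skipped = [], []
    for url in urls:
        try:
            results.append(analyze(url))
        except ContextWindowExceededError as exc:
            skipped.append((url, str(exc)))  # record the issue, move on
    return results, skipped

results, skipped = analyze_all(["a.com", "huge-pdf.com", "b.com"])
print(results)   # both good sites processed
print(skipped)   # the oversized one logged, not fatal
```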

yqup avatar Apr 28 '25 18:04 yqup

LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.

Error during LLM call: litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: OpenAIException - Error code: 400 - {'error': {'message': "This model's maximum context length is 1047576 tokens. However, your messages resulted in 7082326 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} Traceback (most recent call last): File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 711, in completion raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 638, in completion self.make_sync_openai_chat_completion_request( File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 145, in sync_wrapper result = func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 457, in make_sync_openai_chat_completion_request raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 439, in make_sync_openai_chat_completion_request raw_response = openai_client.chat.completions.with_raw_response.create( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 364, in wrapped return cast(LegacyAPIResponse[R], func(*args, **kwargs)) ^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 279, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 879, in create return self._post( ^^^^^^^^^^^ File 
"/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1296, in post return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 973, in request return self._request( ^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1077, in _request raise self._make_status_error_from_response(err.response) from None openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 1047576 tokens. However, your messages resulted in 7082326 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/main.py", line 1692, in completion raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/main.py", line 1665, in completion response = openai_chat_completions.completion( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 721, in completion raise OpenAIError( litellm.llms.openai.common_utils.OpenAIError: Error code: 400 - {'error': {'message': "This model's maximum context length is 1047576 tokens. However, your messages resulted in 7082326 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "/crewai/dev/orchistra/deck/.venv/bin/kickoff", line 10, in sys.exit(kickoff()) ^^^^^^^^^ File "/crewai/dev/orchistra/deck/src/deck/main.py", line 62, in kickoff action_flow.kickoff() File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/flow/flow.py", line 756, in kickoff return asyncio.run(self.kickoff_async()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/asyncio/runners.py", line 195, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/flow/flow.py", line 770, in kickoff_async await asyncio.gather(*tasks) File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/flow/flow.py", line 802, in _execute_start_method result = await self._execute_method( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/flow/flow.py", line 825, in _execute_method else method(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/src/deck/main.py", line 58, in deck self.state.smt_report = Deck().crew().kickoff(inputs=self.state.inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/crew.py", line 576, in kickoff result = self._run_sequential_process() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/crew.py", line 683, in _run_sequential_process return self._execute_tasks(self.tasks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/crew.py", line 781, in _execute_tasks task_output = 
task.execute_sync( ^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/task.py", line 302, in execute_sync return self._execute_core(agent, context, tools) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/task.py", line 366, in _execute_core result = agent.execute_task( ^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agent.py", line 254, in execute_task raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agent.py", line 243, in execute_task result = self.agent_executor.invoke( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 112, in invoke raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 102, in invoke formatted_answer = self._invoke_loop() ^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 160, in _invoke_loop raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 140, in _invoke_loop answer = self._get_llm_response() ^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 210, in _get_llm_response raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 201, in _get_llm_response answer = self.llm.call( ^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/llm.py", line 291, in call response = litellm.completion(**params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1154, in wrapper raise e File 
"/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1032, in wrapper result = original_function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/main.py", line 3068, in completion raise exception_type( ^^^^^^^^^^^^^^^ File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2201, in exception_type raise e File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 282, in exception_type raise ContextWindowExceededError( litellm.exceptions.ContextWindowExceededError: litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: OpenAIException - Error code: 400 - {'error': {'message': "This model's maximum context length is 1047576 tokens. However, your messages resulted in 7082326 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} An error occurred while running the flow: Command '['uv', 'run', 'kickoff']' returned non-zero exit status 1.

yqup avatar Apr 28 '25 18:04 yqup

These are all from the same routine, and I do not have a 7M-token context window. Something weird is going on here.

yqup avatar Apr 28 '25 18:04 yqup

From the issues I am seeing, I think the web scraper is, in some circumstances, reading binary data rather than text. That might explain the token counts.
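One cheap guard (a heuristic of my own, not anything CrewAI ships): reject scraped content that looks binary before it ever reaches the LLM, for example by checking for NUL bytes or a high ratio of non-printable characters.

```python
def looks_binary(data: bytes, threshold: float = 0.30) -> bool:
    """Heuristic: NUL bytes, or too many non-printable bytes, means 'not text'."""
    if not data:
        return False
    if b"\x00" in data:
        return True
    printable = sum(32 <= b < 127 or b in (9, 10, 13) for b in data)
    return (1 - printable / len(data)) > threshold

print(looks_binary(b"Hello, plain HTML page"))  # False
print(looks_binary(bytes(range(256)) * 4))      # True
```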

yqup avatar Apr 28 '25 18:04 yqup

Just double-checking: are you using crewai version 0.117.0? I believe it will not completely fix your issue, but it might help, since we fixed a bug related to respecting the model's context window.

lucasgomide avatar Apr 28 '25 21:04 lucasgomide

Sadly this still occurs, though not as often.

Here is an extra data point: I think PDF content is causing the problem. I asked GPT what the garbled payload was, and it replied:

- The content you shared looks like corrupted or compressed binary data.
- It appears to be part of a PDF file or a compressed file (possibly a corrupted one).
- Parts like /Filter/FlateDecode and stream are indicators it's PDF encoding.
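Consistent with that guess, scraped text could be screened for PDF stream markers before being handed to the model (again a simple heuristic of my own, not an existing CrewAI check):

```python
PDF_MARKERS = (b"%PDF-", b"/FlateDecode", b"startxref", b"%%EOF")

def looks_like_pdf(payload: bytes) -> bool:
    """True if the payload contains at least two typical PDF structure markers."""
    return sum(m in payload for m in PDF_MARKERS) >= 2

sample = b"<</Filter/FlateDecode/Length 2833>>stream ... startxref 116 %%EOF"
print(looks_like_pdf(sample))              # True
print(looks_like_pdf(b"<html>hi</html>"))  # False
```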

```
[Several kilobytes of garbled binary data omitted. Recognisable fragments include
PDF stream markers such as <</Filter/FlateDecode/Length 2833/Subtype/Type1C>>stream,
/Type/ObjStm, /Type/XRef, and a trailing "startxref 116 %%EOF".]
```


```
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.

Error during LLM call: litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: OpenAIException - Error code: 400 - {'error': {'message': "This model's maximum context length is 1047576 tokens. However, your messages resulted in 4336404 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
Traceback (most recent call last):
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 711, in completion
    raise e
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 638, in completion
    self.make_sync_openai_chat_completion_request(
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 145, in sync_wrapper
    result = func(*args, **kwargs)
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 457, in make_sync_openai_chat_completion_request
    raise e
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 439, in make_sync_openai_chat_completion_request
    raw_response = openai_client.chat.completions.with_raw_response.create(
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 279, in wrapper
    return func(*args, **kwargs)
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 879, in create
    return self._post(
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1296, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 973, in request
    return self._request(
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1077, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 1047576 tokens. However, your messages resulted in 4336404 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/main.py", line 1692, in completion
    raise e
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/main.py", line 1665, in completion
    response = openai_chat_completions.completion(
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 721, in completion
    raise OpenAIError(
litellm.llms.openai.common_utils.OpenAIError: Error code: 400 - {'error': {'message': "This model's maximum context length is 1047576 tokens. However, your messages resulted in 4336404 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/crewai/dev/orchistra/deck/.venv/bin/kickoff", line 10, in <module>
    sys.exit(kickoff())
  File "/crewai/dev/orchistra/deck/src/deck/main.py", line 59, in kickoff
    action_flow.kickoff()
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/flow/flow.py", line 756, in kickoff
    return asyncio.run(self.kickoff_async())
  File "/usr/local/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
  File "/usr/local/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
  File "/usr/local/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
    return future.result()
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/flow/flow.py", line 770, in kickoff_async
    await asyncio.gather(*tasks)
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/flow/flow.py", line 802, in _execute_start_method
    result = await self._execute_method(
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/flow/flow.py", line 825, in _execute_method
    else method(*args, **kwargs)
  File "/crewai/dev/orchistra/deck/src/deck/main.py", line 55, in deck
    self.state.smt_report = Deck().crew().kickoff(inputs=self.state.inputs)
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/crew.py", line 576, in kickoff
    result = self._run_sequential_process()
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/crew.py", line 683, in _run_sequential_process
    return self._execute_tasks(self.tasks)
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/crew.py", line 781, in _execute_tasks
    task_output = task.execute_sync(
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/task.py", line 302, in execute_sync
    return self._execute_core(agent, context, tools)
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/task.py", line 366, in _execute_core
    result = agent.execute_task(
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agent.py", line 254, in execute_task
    raise e
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agent.py", line 243, in execute_task
    result = self.agent_executor.invoke(
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 112, in invoke
    raise e
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 102, in invoke
    formatted_answer = self._invoke_loop()
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 160, in _invoke_loop
    raise e
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 140, in _invoke_loop
    answer = self._get_llm_response()
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 210, in _get_llm_response
    raise e
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 201, in _get_llm_response
    answer = self.llm.call(
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/crewai/llm.py", line 291, in call
    response = litellm.completion(**params)
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1154, in wrapper
    raise e
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1032, in wrapper
    result = original_function(*args, **kwargs)
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/main.py", line 3068, in completion
    raise exception_type(
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2201, in exception_type
    raise e
  File "/crewai/dev/orchistra/deck/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 282, in exception_type
    raise ContextWindowExceededError(
litellm.exceptions.ContextWindowExceededError: litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: OpenAIException - Error code: 400 - {'error': {'message': "This model's maximum context length is 1047576 tokens. However, your messages resulted in 4336404 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
An error occurred while running the flow: Command '['uv', 'run', 'kickoff']' returned non-zero exit status 1.
```

```
root@628158b89d2f:/crewai/dev/orchistra/deck# crewai --version
crewai, version 0.117.1
```
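For scale: at the common rough rule of thumb of ~4 characters per token, the 4,336,404 tokens in the error correspond to roughly 17 MB of text in a single request. A minimal sketch of that sanity check (`rough_token_estimate` and `fits_context` are hypothetical helpers for illustration, not part of crewAI or LiteLLM):

```python
# Rough pre-flight check before handing text to an LLM. Assumes ~4
# characters per token for English text -- an approximation, not a
# real tokenizer.
GPT_41_CONTEXT_LIMIT = 1_047_576  # token window reported in the error above

def rough_token_estimate(text: str) -> int:
    """Approximate token count at ~4 chars per token."""
    return len(text) // 4

def fits_context(text: str, limit: int = GPT_41_CONTEXT_LIMIT) -> bool:
    """True if the text is likely to fit within the model's window."""
    return rough_token_estimate(text) <= limit
```

A check like this makes it easy to log which tool output is the oversized one before the API call fails.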

yqup avatar Apr 29 '25 09:04 yqup

@yqup thanks for sharing the logs! You're on the latest version, which likely means we're attempting to summarize the context, but your content might simply be too large. I have a couple more requests:

  • Would you mind sharing your codebase?
  • Could you also host and share the PDF? It’s currently unreadable on my end.
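If the failure is in the summarization step, one common mitigation is to split oversized content into window-sized pieces and summarize each piece in its own LLM call. A minimal sketch, illustrative only (crewAI's internal summarizer may work differently, and the character budget is an assumption, not a crewAI setting):

```python
# Split oversized text into chunks that each fit a context budget, so
# each chunk can be summarized separately (map-reduce style).
def chunk_text(text: str, max_chars: int = 100_000) -> list[str]:
    """Return consecutive slices of text, each at most max_chars long."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
```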

lucasgomide avatar Apr 29 '25 13:04 lucasgomide

I do not know where the PDF is... it is coming from an online search via ScrapeWebsiteTool(). How would I track that down?

This is an example of the task that is causing the issue, so a pretty standard and simple task. It seems to happen randomly... I will keep looking for a pattern.

```python
@task
def case_study_writer_task(self) -> Task:
    return Task(
        config=self.tasks_config['case_study_writer_task'],
        tools=[SerperDevTool(), ScrapeWebsiteTool(), CSVSearchTool()],
        output_file=os.path.join(report_dir, 'case_studies.md'),
    )
```

yqup avatar May 01 '25 08:05 yqup

I'm a bit confused about the issue you’re raising. Is it actually related to the open issue titled “GithubSearchTool unable to pick up custom LLM”? If not, it would be helpful to create a new issue to keep things organized and easier to track for others.

From your recent messages, it sounds like we’re sending too much context to the LLM - which might be the real issue here. One potential cause could be a PDF that’s being sent to the model as plain text (I believe). That’s something that could potentially be optimized with better prompt engineering.

Let me ask again: would you mind sharing your full code snippet? I think that would help us move forward more productively.
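One guard in that prompt-engineering direction is to cap each tool's raw output before it is appended to the agent's context, so a single scraped PDF rendered as plain text cannot dominate the window. A hedged sketch (`truncate_for_llm` is a hypothetical helper, not a crewAI API, and the budget is an assumption):

```python
# Cap a tool's raw output before it reaches the LLM context. The budget
# below (~40k chars, roughly 10k tokens at ~4 chars/token) is illustrative.
MAX_TOOL_OUTPUT_CHARS = 40_000

def truncate_for_llm(text: str, max_chars: int = MAX_TOOL_OUTPUT_CHARS) -> str:
    """Return text capped at max_chars, noting how much was dropped."""
    if len(text) <= max_chars:
        return text
    dropped = len(text) - max_chars
    return text[:max_chars] + f"\n[... truncated {dropped} characters ...]"
```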

lucasgomide avatar May 02 '25 21:05 lucasgomide

@Spartan-71 as OP, are you still facing the related issue?

lucasgomide avatar May 02 '25 21:05 lucasgomide

I cannot share the entire project, as it is for a client. However, I am building another one, and if I get the same issue I will give you access. If it does happen, I will create a new issue so as not to cause confusion.

yqup avatar May 03 '25 06:05 yqup

@yqup No worries at all! I'll be happy to assist you.

lucasgomide avatar May 06 '25 13:05 lucasgomide