
local LLM cannot use Tool

HomunMage opened this issue 1 year ago • 9 comments

import os
from crewai import Agent, Task, Crew, Process
from langchain_community.llms import Ollama

from crewai_tools import BaseTool

# Initialize the local Ollama LLM with the llama3 8B model
llama3 = Ollama(model='llama3:8b')

class FileWriterTool(BaseTool):
    name: str = "FileWriter"
    description: str = "Writes given content to a specified file."

    def _run(self, filename: str, content: str) -> str:
        # Open the specified file in write mode and write the content
        with open(filename, 'w') as file:
            file.write(content)
        return f"Content successfully written to {filename}"

# Set up the FileWriterTool
file_writer = FileWriterTool()

# Define the agent with a role, goal, and tools
researcher = Agent(
    role='Knowledge Article Writer',
    goal='Create and save detailed content on professional domains to a file.',
    backstory="Passionate about crafting in-depth articles on Game Design.",
    verbose=True,
    allow_delegation=False,
    llm=llama3,
    tools=[file_writer]
)

# Create a task that utilizes the FileWriterTool to save content
task1 = Task(
    description="Write and save an article about game design using the FileWriter tool.",
    expected_output="A file named 'game_design_article.txt' with the article content.",
    agent=researcher,
    tools=[file_writer],
    function_args={'filename': 'game_design_article.txt', 'content': 'Detailed content generated by LLM about game design.'}
)

# Instantiate the crew with a sequential process and execute the task
crew = Crew(
    agents=[researcher],
    tasks=[task1],
    process=Process.sequential,
    verbose=2
)

# Execute the crew tasks and print the result
result = crew.kickoff()
print("######################")
print(result)


Local LLMs such as llama3 or gemma cannot save to the file (this is after running ollama serve, of course). However, the same flow succeeds when I use GPT-4 with my API key set.
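
For comparison, the working GPT-4 path only differs in the llm handed to the Agent. A minimal sketch (assuming langchain_openai is installed and OPENAI_API_KEY is exported; everything else in the script above stays the same):

from langchain_openai import ChatOpenAI

# Sketch: swap the local Ollama model for GPT-4; the Agent, Task, and Crew
# definitions from the script above are unchanged.
gpt4 = ChatOpenAI(model="gpt-4", temperature=0)

# ...then pass llm=gpt4 to the Agent in place of llm=llama3.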

HomunMage avatar May 03 '24 05:05 HomunMage

Sometimes the model is just not capable enough. That said, I'd recommend trying a new version we are testing, 0.30.0rc5; you can install it with: pip install 'crewai[tools]'==0.30.0rc5

On this version you can now also use the prompt format the model was trained with, for example:

agent = Agent(
    role="{topic} specialist",
    goal="Figure {goal} out",
    backstory="I am the master of {role}",
    system_template="""<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>""",
    prompt_template="""<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>""",
    response_template="""<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>""",
)

I'll try running your example locally myself tomorrow as well, but wanted to share some context that might help :)

joaomdmoura avatar May 03 '24 05:05 joaomdmoura

I want to try with phi3.

francescoagati avatar May 04 '24 05:05 francescoagati

Your script is not working for me, but it did do "better" with the system template. After a bit of digging around, I believe the problem may come from LangChain's inability to pass "raw=true" through to Ollama. I believe this is necessary to allow the prompt to override the template in Ollama's Modelfile.

Please also keep in mind that I am not sure about this at all, but there seem to be no info, issues, or PRs on it.
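
For reference, the "raw" option discussed here is part of Ollama's own generate API, not crewAI or LangChain. A rough sketch of what it looks like when calling the endpoint directly (assumes ollama serve is running on the default port with llama3:8b pulled):

import requests

# Sketch: raw=True tells Ollama to skip the Modelfile template and send the
# prompt to the model verbatim, which is what the comment above is getting at.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3:8b",
        "prompt": "<|start_header_id|>user<|end_header_id|>\n\nHello<|eot_id|>",
        "raw": True,
        "stream": False,
    },
)
print(response.json()["response"])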

TheBitmonkey avatar May 04 '24 14:05 TheBitmonkey

It seems to work right with phi3.

HomunMage avatar May 07 '24 12:05 HomunMage

Can confirm it's working a bit with Hermes 2 Pro... it seems llama3 just doesn't get us, man.

TheBitmonkey avatar May 07 '24 14:05 TheBitmonkey

Also works with Hermes 2 Llama 3 and Hermes Solar 10.7B, and it is also tested to be working with Dolphin 2.8 Mistral 7B. However, it does take multiple attempts. There was one model that did really well; I'll find it tomorrow.

hassamc avatar May 12 '24 08:05 hassamc

Hey folks, on this version there are a couple of features that will give better support to local models. I'm putting together new docs on those to help out!

joaomdmoura avatar May 12 '24 16:05 joaomdmoura

It seems to work right with phi3.

Can you share your environment + code? #621 uses this code without success.

noggynoggy avatar May 15 '24 13:05 noggynoggy

It seems to work right with phi3.

Can you share your environment + code? #621 uses this code without success.

My code is in the first comment: https://github.com/joaomdmoura/crewAI/issues/554#issue-2276962263 (just replace llama3 with phi3; the environment is both Windows and Linux).

It needs several tries; it succeeds or fails randomly.
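
For clarity, the only code change relative to the first comment is the model tag passed to Ollama. A minimal sketch (assumes the model was pulled first with ollama pull phi3):

from langchain_community.llms import Ollama

# Sketch: same script as the first comment, with only the model swapped.
phi3 = Ollama(model='phi3')

# ...then pass llm=phi3 to the Agent in place of the llama3 instance.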

HomunMage avatar May 15 '24 16:05 HomunMage

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Aug 17 '24 12:08 github-actions[bot]

This issue was closed because it has been stalled for 5 days with no activity.

github-actions[bot] avatar Aug 23 '24 12:08 github-actions[bot]