System Message causing no answer from Assistant
Hello all,
I'm trying to use the system message as shown below. Every time I use it, I get no answer from the LLM.
messages = [
  {'role': 'system', 'content': f'"{self.role}"'},
  {'role': 'user', 'content': f'"{message}"'},
]
return await client.chat(model=model, messages=messages)
I tried to find out whether this issue had already been reported, but I didn't find anything. Can someone help me with this?
Thanks
For general use as shown in most examples, you need a local ollama server running before you can continue.
To do this:
- Download: https://ollama.com/
- In your terminal, run an LLM:
  - See available LLMs: https://ollama.com/library
  - Example: ollama run llama2
  - Example: ollama run llama2:70b
- If you want to use a non-local server (or a different local one), see the docs on Custom Client (a short sketch follows after this list)
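As a sketch of that last point, assuming the server listens on the default port 11434 (the host URL below is only a placeholder):

from ollama import Client

# Point the client at a specific server; the host URL here is just an example.
client = Client(host='http://192.168.1.50:11434')
response = client.chat(model='llama2', messages=[
  {'role': 'user', 'content': 'Why is the sky blue?'},
])
print(response['message']['content'])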
This is the wording from the PR: https://github.com/ollama/ollama-python/pull/64
It is also worth noting that you are using an await. Are you using an async client?
For a non-async client, you do not need await:
import ollama

response = ollama.chat(model='llama2', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response['message']['content'])
For an async client, you should use await:
import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  response = await AsyncClient().chat(model='llama2', messages=[message])

asyncio.run(chat())
@connor-makowski Thanks for your feedback. I tried both solutions (sync and async clients). The problem is that when I include a message with the system role, the LLM gives no answer.
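For reference, a stripped-down sketch of what I am trying with the sync client (the model name and message contents are just examples):

import ollama

# A system message followed by a user message; the texts below are placeholders.
response = ollama.chat(model='llama2', messages=[
  {'role': 'system', 'content': 'You are a helpful assistant that answers briefly.'},
  {'role': 'user', 'content': 'Why is the sky blue?'},
])
print(response['message']['content'])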
what model are you using?
your snippet doesn't stream. is it possible the llm is responding but hasn't completed yet? in this mode, ollama waits until it has the full response before returning the call. this could look like a non-response if it's also generating tokens at a slow rate (due to hardware limitations)
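as a quick sketch, you could switch to streaming to check whether tokens are arriving slowly (model name and messages below are just examples):

import ollama

# With stream=True the call yields partial responses as they are generated
# instead of waiting for the full completion.
stream = ollama.chat(
  model='llama2',
  messages=[
    {'role': 'system', 'content': 'You are a concise assistant.'},
    {'role': 'user', 'content': 'Why is the sky blue?'},
  ],
  stream=True,
)
for chunk in stream:
  print(chunk['message']['content'], end='', flush=True)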