bug: Differences between OpenAI and Gemini
Did you check docs and existing issues?
- [x] I have read all the NeMo-Guardrails docs
- [x] I have updated the package to the latest version before submitting this issue
- [ ] (optional) I have used the develop branch
- [x] I have searched the existing issues of NeMo-Guardrails
Python version (python --version)
3.10.15
Operating system/version
macOS 14.6.1
NeMo-Guardrails version (if you must use a specific version and not the latest)
No response
Describe the bug
Hi, thank you for your work. I noticed that I obtain different results with the response generator (the `...` operator in Colang 2) depending on the provider. With OpenAI I obtain the desired response, while with Gemini I receive a syntax error, changing ONLY the config file. I am testing this with two different NeMo servers (one with actions, one without); the servers themselves work fine, and I get the same problem when running locally. These are the two config files I used:
```yaml
colang_version: 2.x
models:
  - type: main
    engine: vertexai
    model: gemini-1.5-pro
```

```yaml
colang_version: 2.x
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
```
Using prints and logs for debugging, I noticed that the error comes from the line `$output = ..."'{$ref_use}'"`. I have also tried variations of how I pass the input (for instance `$ref_use.transcript`), but they did not work. This is the error with Gemini:
```
# This is the current conversation between the user and the bot:
user action: user said "Hi! Can you do the spelling of the following name <PERSON>? Thanks"
# 'Hi! Can you do the spelling of the following name <PERSON>? Thanks'
$output =
/Users/vrige/Library/Caches/pypoetry/virtualenvs/llm-gateway-C7tEgGBA-py3.10/lib/python3.10/site-packages/proto/message.py:389: DeprecationWarning: The argument `including_default_value_fields` has been removed from Protobuf 5.x. Please use `always_print_fields_with_no_presence` instead.
  warnings.warn(
LLM Completion (d9093..)
user intent: user asked for spelling of a name
17:47:27.685 | Output Stats None
17:47:27.685 | LLM call took 2.36 seconds
WARNING:nemoguardrails.actions.action_dispatcher:Error while execution 'GenerateValueAction' with parameters '{'var_name': 'output', 'instructions': "'Hi! Can you do the spelling of the following name <PERSON>? Thanks'"}': Invalid LLM response: `user intent: user asked for spelling of a name`
ERROR:nemoguardrails.actions.action_dispatcher:Invalid LLM response: `user intent: user asked for spelling of a name`
Traceback (most recent call last):
  File "/Users/vrige/Library/Caches/pypoetry/virtualenvs/llm-gateway-C7tEgGBA-py3.10/lib/python3.10/site-packages/nemoguardrails/actions/v2_x/generation.py", line 813, in generate_value
    return literal_eval(value)
  File "/Users/vrige/.pyenv/versions/3.10.15/lib/python3.10/ast.py", line 64, in literal_eval
    node_or_string = parse(node_or_string.lstrip(" \t"), mode='eval')
  File "/Users/vrige/.pyenv/versions/3.10.15/lib/python3.10/ast.py", line 50, in parse
    return compile(source, filename, mode, flags,
  File "<unknown>", line 1
    user intent: user asked for spelling of a name
         ^^^^^^
SyntaxError: invalid syntax
```
The input message is the following one:
'Hi! Can you do the spelling of the following name <PERSON>? Thanks'
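For reference, the parsing failure can be reproduced outside NeMo-Guardrails. According to the traceback, `generate_value` parses the LLM response with `ast.literal_eval`, which only accepts Python literals, so the unquoted text returned by Gemini cannot parse (a minimal sketch, with the quoted string standing in for what OpenAI presumably returns):

```python
from ast import literal_eval

# A quoted Python string literal (presumably what OpenAI returns) parses fine.
print(literal_eval("'Hi! Can you do the spelling of the following name <PERSON>? Thanks'"))

# Free-form text (what Gemini returns) is not a Python literal and raises the
# same SyntaxError shown in the traceback above.
literal_eval("user intent: user asked for spelling of a name")
```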
Finally, `simpleAction` performs a custom validation of the output; that part works fine.
I need to work with Gemini; is there a way to overcome this problem?
Steps To Reproduce
```
import core
import llm

flow main
  user said something as $ref_use
  $output = ..."'{$ref_use}'"
  await analyse_output(ref_use=$output) as $ref_act_out

flow analyse_output $ref_use
  $result = await simpleAction(inputs=$ref_use)
  if "Error: " in $result
    bot say "I do not like it. Change your input"
  else
    bot say $result
```
UPDATE: I also tried the following simplified flow, but I got the same error.
```
import core
import llm

flow main
  user said something as $ref_use
  $output = ..."'{$ref_use}'"
```
Expected Behavior
The OpenAI response is the following one: `Sure, the spelling of the name << PERSON >> is <US_ITIN>. Is there anything else I can assist you with?`
Actual Behavior
The same Gemini error and traceback shown in "Describe the bug" above: the LLM responds with `user intent: user asked for spelling of a name` instead of a Python literal, and `literal_eval` fails with `SyntaxError: invalid syntax`.
Hi @vrige,
Thank you for your report! Colang 2 currently supports and is tested only with OpenAI model versions >= gpt-3.5-turbo and llama3.*-8b/70b/405b. As you reported, Gemini models generate an incompatible response that cannot be parsed.
To fix that, we would need to add a new Gemini-specific prompt template `nemoguardrails/llm/prompts/gemini.yml` that is adapted/tuned to generate the right responses. If you want to try this yourself, start by making a copy of `openai-chatgpt.yml`, adapt all the prompts for the Gemini model, and compare its results to the ones from GPT. Alternatively, you can get started quickly by adding the new prompt templates to your bot configuration YAML directly, without changing the guardrails library. See `tests/test_configs/with_prompt_override/config.yml` for an example.
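As a rough illustration only (not a tested configuration), such an override in the bot's config.yml might look like the sketch below; the task name, model key, and content are assumptions you would need to adapt from the packaged prompt templates:

```yaml
# Sketch only: override a prompt template directly in the bot configuration
# instead of adding a new nemoguardrails/llm/prompts/gemini.yml file.
prompts:
  - task: generate_value_from_instruction  # assumed task name; verify against the packaged v2 templates
    models:
      - vertexai/gemini-1.5-pro
    content: |-
      # Copy the corresponding template from nemoguardrails/llm/prompts/ here
      # and tune the wording so the model replies with a single Python literal
      # (e.g. a quoted string), since the result is parsed with ast.literal_eval.
```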