generative-ai-python

stop_reason is always STOP, and usage_metadata is consistently empty

Andy963 opened this issue 1 year ago

Description of the bug:

version: 0.5.4, Python 3.9

The stop reason is always "STOP" and never changes, and usage_metadata is always empty.

Actual vs expected behavior:

async for item in resp:
    c = item.candidates
    print("stop_reason", c[0].finish_reason, f"usage:{item.usage_metadata}:end")

output:

stop_reason FinishReason.STOP usage::end # usage is empty
stop_reason FinishReason.STOP usage::end
stop_reason FinishReason.STOP usage::end
answer is: I am a large language model, trained by Google. I do not have a name. 

input_tokens 5  # counted with await client.count_tokens_async, not taken from usage_metadata
output_tokens 20

The finish_reason of the earlier chunks should be empty, None, or something like "start", not STOP; and usage_metadata should contain the prompt and output token counts, but it is always empty.
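
For reference, here is a minimal sketch of the full streaming call that the loop above iterates over. The model name, prompt, and GOOGLE_API_KEY lookup are illustrative assumptions, not taken from the report, and it assumes google-generativeai 0.5.x:

import asyncio
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # illustrative key lookup

async def main():
    model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name
    resp = await model.generate_content_async("What is your name?", stream=True)
    async for item in resp:
        c = item.candidates
        # Expectation: intermediate chunks carry no terminal finish_reason, and
        # usage_metadata reports the prompt/candidates token counts.
        print("stop_reason", c[0].finish_reason, f"usage:{item.usage_metadata}:end")

asyncio.run(main())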

In non-streaming mode, the "token_count" field is always 0:

GenerateContentResponse(
    done=True,
    iterator=None,
    result=glm.GenerateContentResponse({
      "candidates": [
        {
          "content": {
            "parts": [
              {
                "text": "Hello! \ud83d\udc4b  How can I help you today? \ud83d\ude0a \n"
              }
            ],
            "role": "model"
          },
          "finish_reason": 1,
          "index": 0,
          "safety_ratings": [
            {
              "category": 9,
              "probability": 1,
              "blocked": false
            },
            {
              "category": 8,
              "probability": 1,
              "blocked": false
            },
            {
              "category": 7,
              "probability": 1,
              "blocked": false
            },
            {
              "category": 10,
              "probability": 1,
              "blocked": false
            }
          ],
          "token_count": 0,  # this field is alway 0
          "grounding_attributions": []
        }
      ]
    }),
)
answer is: Hello! 👋  How can I help you today? 😊 

input_tokens 1  # counted with client.count_tokens, not taken from usage_metadata
output_tokens 14
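
For comparison, a sketch of the non-streaming path and of the count_tokens fallback used for the token figures above. The model name, prompt, and GOOGLE_API_KEY lookup are illustrative assumptions; it assumes google-generativeai 0.5.x:

import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # illustrative key lookup
model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name

response = model.generate_content("Hi")

# Fields the report expects to be populated, but which come back 0/empty on 0.5.4:
print(response.candidates[0].token_count)
print(response.usage_metadata)

# Fallback used for the input_tokens/output_tokens figures above:
input_tokens = model.count_tokens("Hi").total_tokens
output_tokens = model.count_tokens(response.text).total_tokens
print("input_tokens", input_tokens)
print("output_tokens", output_tokens)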

Was this behavior caused by a bug, or is the server not yet equipped to handle this?

Any other information you'd like to share?

package version: 0.5.4, Python 3.9

Andy963 · May 19 '24, 14:05

I am also getting the issue of usage_metadata always being empty, even when running Gemini Cookbook notebook code directly

GMC-Nickies · May 20 '24, 14:05

Any ideas? I'm trying to translate a text that Gemini itself generated, and I also receive:

StopCandidateException(index: 0
finish_reason: OTHER
)

It's killing my production launch. :'(

model_name="gemini-1.5-flash"
safety_settings={
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}
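
For context, a sketch of how these settings are typically passed to a model instance. The prompt is an illustrative placeholder, and it assumes the text is sent through a chat session, one place where the SDK surfaces StopCandidateException when a candidate ends with a reason like OTHER:

import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    safety_settings={
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    },
)

chat = model.start_chat()
# A candidate that terminates with finish_reason OTHER is what surfaces as the
# StopCandidateException above; OTHER is not a safety category, so relaxing the
# thresholds to BLOCK_NONE does not cover that case.
response = chat.send_message("Translate the following text to English: ...")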

jpchavat · Aug 07 '24, 16:08

Hi, sorry about the slow response here.

This seems to be fixed. I've seen just about every stop reason, and usage_metadata is consistently populated.
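
For anyone verifying on a current release, a quick spot-check sketch (the prompt, model name, and GOOGLE_API_KEY lookup are illustrative; the usage_metadata field names follow the API's UsageMetadata message):

import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # illustrative key lookup

response = genai.GenerativeModel("gemini-1.5-flash").generate_content("Hi")
print(response.candidates[0].finish_reason)            # e.g. STOP, MAX_TOKENS, SAFETY, ...
print(response.usage_metadata.prompt_token_count)
print(response.usage_metadata.candidates_token_count)
print(response.usage_metadata.total_token_count)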

MarkDaoust · Feb 19 '25, 15:02