Siyu Yuan
I also have the same error:(
> According to OpenAI's documentation, the logprobs are returned in a different format when using the Chat Completion API, which is different from the format used in the old...
Thank you for your response! I have tried using LangChain to get the logprobs, but it does not work :( For your reference:
```
chat = ChatOpenAI(model=model_name, temperature=0, model_kwargs={"logprobs": True, "top_logprobs": 1})
...
```
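In case it helps anyone else: binding `logprobs` at call time instead of passing it through `model_kwargs` may surface the values in the message metadata. A minimal sketch, assuming a recent `langchain-openai` release (the model name is just a placeholder, and I have not checked whether `top_logprobs` binds the same way):
```
from langchain_openai import ChatOpenAI  # assumes the langchain-openai package is installed

# Bind logprobs at invocation time instead of passing them via model_kwargs.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0).bind(logprobs=True)

msg = llm.invoke("Hi!")

# When the request succeeds, per-token logprobs appear in the message metadata.
print(msg.response_metadata["logprobs"]["content"][:3])
```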
@anon998 Sorry to bother you. This is my code:
```
result = openai.ChatCompletion.create(
    temperature=0,
    model=model,
    messages=[{"role": "user", "content": "Hi!"}],
    max_tokens=500,
    top_logprobs=2,
    logprobs=True,
)
```
However, the result is:
```
JSON: {...
```
@anon998 Thank you so much! After I updated vLLM, it works!!
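For anyone who lands here later, this is roughly what the working call and the logprob parsing look like. It is only a sketch, assuming the pre-1.0 `openai` package pointed at vLLM's OpenAI-compatible server (the base URL, key, and model name below are placeholders), with the response fields taken from the Chat Completions logprobs format:
```
import openai

# Point the pre-1.0 client at vLLM's OpenAI-compatible server (placeholder URL/key).
openai.api_base = "http://localhost:8000/v1"
openai.api_key = "EMPTY"

result = openai.ChatCompletion.create(
    temperature=0,
    model="my-model",  # placeholder: whatever model the vLLM server is serving
    messages=[{"role": "user", "content": "Hi!"}],
    max_tokens=500,
    logprobs=True,
    top_logprobs=2,
)

# In the Chat Completions format, per-token logprobs sit under choices[0].logprobs.content.
for tok in result["choices"][0]["logprobs"]["content"]:
    print(tok["token"], tok["logprob"], tok["top_logprobs"])
```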
> I will add documentation for Official API later today

wow! Thank you sooo much :)
> @siyuyuan Released! https://github.com/acheong08/ChatGPT/releases/tag/1.0.1

Thank you! But how much do the tokens cost for the ChatGPT API? Is it the same as text-davinci-003?
> @acheong08 Reviewing this issue side-by-side with the same Microsoft Account authenticated, it seems to be fine to spit out an answer at the exact same moment as it fails...
> @siyuyuan Released! https://github.com/acheong08/ChatGPT/releases/tag/1.0.1

Wonderful!!!!! Your code makes it much more convenient and faster for me to use ChatGPT, thank you very much!!
> I don't either. Wait a while for the servers to recover from overload

I have the same problem: `openai.error.ServiceUnavailableError: The server is overloaded or not ready yet.`
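There is not much to do but wait for the servers, though a simple retry with backoff at least keeps scripts from crashing on this error. A minimal sketch, assuming the pre-1.0 `openai` package (where `openai.error.ServiceUnavailableError` lives); the model name is a placeholder:
```
import time
import openai

def chat_with_retry(messages, model="gpt-3.5-turbo", max_retries=5):
    # Retry with exponential backoff while the server reports it is overloaded.
    for attempt in range(max_retries):
        try:
            return openai.ChatCompletion.create(model=model, messages=messages)
        except openai.error.ServiceUnavailableError:
            time.sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...
    raise RuntimeError("Server still overloaded after all retries")
```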