tfk12

12 comments of tfk12

> Well, I think I've found the reason. I was using an API key generated earlier, which worked fine with `openai.Completion.create` and the text-davinci-003 model. After I regenerated a new...

> Well, that would be nice of you. By the way, I wonder if you know how to make the conversation run for more than one round. At https://help.openai.com/en/articles/7039783-chatgpt-api-faq, I...

> You have to include an array containing the past conversation in the request, because it is stated that the model has no memory of past conversations: `messages=[ {"role": "system", "content": "You are...`
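As an illustration only (not from the original comment), a minimal sketch of such a multi-round loop, assuming the legacy `openai<1.0` Python client and the `gpt-3.5-turbo` model:

```python
import openai

openai.api_key = "sk-..."  # placeholder key

# The chat endpoint is stateless: every request must carry the full
# conversation so far in the `messages` array.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
]

def ask(user_text):
    messages.append({"role": "user", "content": user_text})
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    reply = resp["choices"][0]["message"]["content"]
    # Feed the assistant's reply back in, so the next round sees it.
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("Hello!"))                # round 1
print(ask("What did I just say?"))  # round 2: the model sees round 1
```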

I tried it; in fact, it works if you just add the Azure-specific parameters directly in the code, e.g. `openai.api_base` and `openai.api_type`. Of course, it would probably be more convenient to put them in apikey.ini later.
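For reference, a hedged sketch of setting those Azure-only parameters in code with the legacy `openai<1.0` client; the resource URL, deployment name, API version, and key below are placeholders, not values from the original comment:

```python
import openai

# Azure-specific settings (placeholders, not real values):
openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "<your-azure-key>"

# With api_type="azure", requests address a deployment via `engine`
# instead of `model`.
resp = openai.ChatCompletion.create(
    engine="<your-deployment-name>",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp["choices"][0]["message"]["content"])
```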

> Hi, we have a generation example with all the details: https://github.com/Eladlev/AutoPrompt/blob/main/docs/examples.md#generating-movie-reviews-generation-task
>
> Let me know if something is not clear

Thank you for your quick response. I have...

and I still got a 'KeyError' related to samples, like others.

**Here is my default_config:**

```
use_wandb: False
dataset:
  name: 'dataset'
  records_path: null
  initial_dataset: ''
  label_schema: ["Yes", "No"]
  max_samples: ...
```

> and I still got a 'KeyError' related to samples, like others.
>
> Here is my default_config:
>
> use_wandb: False dataset: name: 'dataset' records_path: null initial_dataset: ''...

> It seems like the model was not able to generate samples. I'm guessing that it's either an issue with the connection to your Azure account, or that you are using GPT-3.5...

Log file content:

```
2024-07-17 16:22:22,175 - DEBUG - load_ssl_context verify=True cert=None trust_env=True http2=False
2024-07-17 16:22:22,176 - DEBUG - load_verify_locations cafile='/home/tfk/miniconda3/envs/py311/lib/python3.11/site-packages/certifi/cacert.pem'
2024-07-17 16:22:22,187 - DEBUG - load_ssl_context verify=True cert=None trust_env=True http2=False
...
```
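For context, the `load_ssl_context` / `load_verify_locations` lines come from httpx, the HTTP client the openai library uses under the hood. A minimal sketch of the logging setup that produces output in this "timestamp - LEVEL - message" shape (an assumption, not taken from the log itself):

```python
import logging

# DEBUG-level root logging surfaces httpx's SSL-context messages in the
# same format as the log excerpt above.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s - %(levelname)s - %(message)s",
)
```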

I have changed the model to gpt-4-1106-preview, and the error remains the same. Can you please help me check the prompt files I use?

```
predictor:
  method : 'llm'
  config:
    llm:
      ...
```