
42 comments of Eladlev

Hi, I observe that you modified the annotator to be an LLM estimator. However, the prompt for the annotator asks the model to classify 'Yes' or 'No', whereas the ranker labels...
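For illustration, the label mismatch described above could be bridged by normalizing the annotator's binary answer into the label scale the ranker expects. This is a hypothetical sketch only: the function name and the numeric labels '5'/'1' are assumptions for illustration, not part of AutoPrompt's API.

```python
# Hypothetical helper: map a binary annotator answer ('Yes'/'No')
# to a ranker-style score label. The '5'/'1' values are assumed
# defaults for illustration, not AutoPrompt's actual labels.
def binary_to_rank_label(annotation: str,
                         yes_label: str = "5",
                         no_label: str = "1") -> str:
    """Normalize a 'Yes'/'No' annotation into a ranker label."""
    normalized = annotation.strip().lower()
    if normalized == "yes":
        return yes_label
    if normalized == "no":
        return no_label
    raise ValueError(f"Unexpected annotation: {annotation!r}")

print(binary_to_rank_label("Yes"))   # -> 5
print(binary_to_rank_label(" no "))  # -> 1
```

A normalization step like this (or, equivalently, aligning the two prompts' label sets) avoids the annotator and ranker disagreeing on the label vocabulary.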

Hi @danielliu99,
1. At least the predictor should not be blank (after the iteration is completed).
2. If you are using an LLM ranker, then you should skip the ranking...

Could you provide the exact configuration? In which mode did you try to run the code?

When working with HuggingFacePipeline you should change the meta prompts to the completion format in the config_default file (the default meta prompts use OpenAI functions):

```
meta_prompts:
  folder: 'prompts/meta_prompts_completion'...
```

Doing this separation sounds like a great feature.

We are using the LangChain pipeline, and it seems there is an issue with the model prompt template: https://github.com/PromtEngineer/localGPT/issues/340

We also support 'completion' mode (for non-OpenAI models); this can be done by changing the meta prompts to meta_prompts_completion. Regarding support for more methods to enforce the structure, we would...

Local models are supported via the LangChain HuggingFacePipeline: https://github.com/Eladlev/AutoPrompt/issues/40#issuecomment-2016365671

Hi, a few remarks: 1. It seems that your ranker is too 'weak': it only requires that the SQL query be relevant, so it is very hard to generate synthetic...

See here for more info on the LangChain HuggingFacePipeline: https://python.langchain.com/docs/integrations/llms/huggingface_pipelines/ You need to put the model ID in the config file as the 'name'; see here: https://github.com/Eladlev/AutoPrompt/issues/40#issuecomment-2016365671 This is...
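As a rough sketch of what that configuration could look like: only the fact that the model ID goes under 'name' comes from the comment above; the surrounding key names and the example model ID are illustrative placeholders, not AutoPrompt's actual schema.

```
llm:
  type: 'HuggingFacePipeline'   # illustrative key: use a local model via LangChain
  name: 'mistralai/Mistral-7B-Instruct-v0.2'  # placeholder model ID from the HuggingFace Hub
```

Check the repository's config_default file for the exact key names before applying a change like this.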