
Text Generator Internal prompt completion errors

Open beardog6 opened this issue 1 year ago • 2 comments

Self Checks

  • [X] This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • [X] I have searched for existing issues, including closed ones.
  • [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [X] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
  • [X] Please do not modify this template :) and fill in all the required fields.

Dify version

0.6.13

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

When the model is a code-completion (continuation) model, an error in how the built-in prompt is assembled causes the model to return abnormal output. Steps to reproduce:

  1. Add codeqwen1.5 as a code-completion model.
  2. Create a text generation application and select codeqwen1.5 as the model.
  3. Set the prompt to: <fim_prefix>{{code_pre}}<fim_suffix>{{code_sur}}<fim_middle>
  4. Enter any text for code_pre and code_sur and click generate; the prompt received by the model becomes: <fim_prefix>xxxxxx<fim_suffix>xxxxxxx<fim_middle>\n{{#query#}}

The cause: in core/prompt/simple_prompt_transform.py, the get_prompt_template method assembles the prompt incorrectly. When the app mode is COMPLETION or WORKFLOW there is no query variable, yet an extra \n and {{#query#}} are still concatenated onto the prompt.
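
For illustration, here is a minimal standalone sketch of the guard this implies (this is not Dify's actual code; the AppMode values and the append_query_prompt helper are hypothetical, only the string behaviour matters):

from enum import Enum

class AppMode(Enum):
    # Hypothetical subset of app modes, for illustration only
    CHAT = "chat"
    COMPLETION = "completion"
    WORKFLOW = "workflow"

def append_query_prompt(prompt: str, app_mode: AppMode, query_prompt: str = "\n{{#query#}}") -> str:
    # Completion and workflow apps have no user query, so the prompt
    # should be returned unchanged instead of getting '\n{{#query#}}' appended.
    if app_mode in (AppMode.COMPLETION, AppMode.WORKFLOW):
        return prompt
    return prompt + query_prompt

fim_prompt = "<fim_prefix>{{code_pre}}<fim_suffix>{{code_sur}}<fim_middle>"
print(append_query_prompt(fim_prompt, AppMode.COMPLETION))
# -> <fim_prefix>{{code_pre}}<fim_suffix>{{code_sur}}<fim_middle>
print(append_query_prompt(fim_prompt, AppMode.CHAT))
# -> <fim_prefix>{{code_pre}}<fim_suffix>{{code_sur}}<fim_middle>\n{{#query#}}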

✔️ Expected Behavior

The prompt sent to the model should be: <fim_prefix>xxxxxx<fim_suffix>xxxxxxx<fim_middle>

❌ Actual Behavior

The prompt actually received by the model is: <fim_prefix>xxxxxx<fim_suffix>xxxxxxx<fim_middle>\n{{#query#}}

beardog6 avatar Jul 10 '24 03:07 beardog6

To resolve the issue in Dify 0.6.13 where the get_prompt_template method in core/prompt/simple_prompt_transform.py incorrectly appends \n{{#query#}} to the prompt for the codeqwen1.5 model, you can modify get_prompt_template to special-case that model. Here is the updated code:

def get_prompt_template(self, app_mode: AppMode,
                        provider: str,
                        model: str,
                        pre_prompt: str,
                        has_context: bool,
                        query_in_prompt: bool,
                        with_memory_prompt: bool = False) -> dict:
    prompt_rules = self._get_prompt_rule(
        app_mode=app_mode,
        provider=provider,
        model=model
    )

    custom_variable_keys = []
    special_variable_keys = []

    # Assemble the system prompt in the order defined by the prompt rules.
    prompt = ''
    for order in prompt_rules['system_prompt_orders']:
        if order == 'context_prompt' and has_context:
            prompt += prompt_rules['context_prompt']
            special_variable_keys.append('#context#')
        elif order == 'pre_prompt' and pre_prompt:
            prompt += pre_prompt + '\n'
            pre_prompt_template = PromptTemplateParser(template=pre_prompt)
            custom_variable_keys = pre_prompt_template.variable_keys
        elif order == 'histories_prompt' and with_memory_prompt:
            prompt += prompt_rules['histories_prompt']
            special_variable_keys.append('#histories#')

    if query_in_prompt:
        if model == 'codeqwen1.5':
            # For codeqwen1.5, drop the leading '\n' before the query placeholder
            # so the FIM prompt is not broken by an extra newline.
            prompt += prompt_rules.get('query_prompt', '{{#query#}}').replace('\n{{#query#}}', '{{#query#}}')
        else:
            prompt += prompt_rules.get('query_prompt', '{{#query#}}')
        special_variable_keys.append('#query#')

    return {
        "prompt_template": PromptTemplateParser(template=prompt),
        "custom_variable_keys": custom_variable_keys,
        "special_variable_keys": special_variable_keys,
        "prompt_rules": prompt_rules
    }

This modification ensures that \n{{#query#}} is not appended to the prompt for the codeqwen1.5 model, preventing the abnormal model responses [1].
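
A quick way to spot-check the expected output is a standalone snippet like the one below (plain string substitution stands in for Dify's PromptTemplateParser; the sample code_pre/code_sur values are arbitrary):

# Hypothetical smoke check: render the FIM template the way the report does
# and confirm no stray query placeholder is left in the assembled prompt.
template = "<fim_prefix>{{code_pre}}<fim_suffix>{{code_sur}}<fim_middle>"
assembled = (template
             .replace("{{code_pre}}", "def add(a, b):")
             .replace("{{code_sur}}", "    return result"))

assert "{{#query#}}" not in assembled
assert not assembled.endswith("\n{{#query#}}")
print(assembled)
# -> <fim_prefix>def add(a, b):<fim_suffix>    return result<fim_middle>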

To continue talking to Dosu, mention @dosu.

dosubot[bot] avatar Jul 10 '24 03:07 dosubot[bot]

Please feel free to open a pull request. :)

crazywoola avatar Jul 10 '24 04:07 crazywoola