Enable more complex `prompts`
The current generator interface expects to receive prompts as `str`; see: https://github.com/leondz/garak/blob/4127ae5092ad3acaba680a32011018fc564cc92a/garak/generators/base.py#L66
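For context, the signature at that line is approximately the following (paraphrased; see the link for the exact code):

```python
def generate(self, prompt: str) -> List[str]:
```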
This initial simple submission process has worked to date; however, #587 shows an example of a query prompt that needs a more complex structure. In that case, the multi-modal model accepts both text and image data to generate a response.
I propose adding an abstraction layer by implementing a `Prompt` base interface class that can be extended to model these more complex prompts, which each generator would then process:
```python
def generate(self, prompt: Prompt) -> List[str]:
```
or possibly abstracting the response as well:
```python
def generate(self, prompt: Prompt) -> List[PromptResponse]:
```
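The shape of `PromptResponse` is not pinned down here; as a purely hypothetical sketch, it could pair generated text with any non-text outputs:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class PromptResponse:
    # Hypothetical shape -- just one option, not part of the proposal above.
    text: Optional[str] = None  # generated text, if any
    media: Dict[str, Any] = field(default_factory=dict)  # e.g. images, audio
```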
`Prompt` can then be further segmented into constructs like `TextPrompt`, `MultiStepTextPrompt`, `VisualPrompt`, and `VisualTextPrompt` that build on the base functionality, allowing use of different and even mixed prompt modalities for models that accept various input patterns.
Rough example:

```python
import logging

from PIL import Image

logger = logging.getLogger(__name__)

class Prompt:
    text = None

    def __str__(self):
        return self.text

class TextPrompt(Prompt):
    def __init__(self, text: str):
        self.text = text

class VisualTextPrompt(Prompt):
    image = None

    def __init__(self, text: str, image_path: str):
        self.text = text
        try:
            self.image = Image.open(image_path)
        except Exception:
            logger.error(f"No image found at: {image_path}")
```
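To illustrate how a generator might consume these prompt types (hypothetical code, not garak's current API; `_call_text_api` and `_call_multimodal_api` are placeholder names):

```python
from typing import List

class ExampleGenerator:
    # Hypothetical generator showing dispatch on concrete Prompt subclasses.
    def generate(self, prompt: Prompt) -> List[str]:
        if isinstance(prompt, VisualTextPrompt):
            return self._call_multimodal_api(prompt.text, prompt.image)
        return self._call_text_api(str(prompt))

    def _call_text_api(self, text: str) -> List[str]:
        raise NotImplementedError  # backend-specific

    def _call_multimodal_api(self, text, image) -> List[str]:
        raise NotImplementedError  # backend-specific
```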
Another recent finding related to multi-modal prompts is a need to define relationships between parts of the prompt. The case identified is that some models' request formats may have different expectations for referencing images in text. The current visual_jailbreak prompts include a placeholder in the text segment of the prompt that some models may need to remove, or replace with an API-specific link or embedding.
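As a rough illustration of the placeholder problem (the `<image>` token and the `render_for_api` helper are hypothetical, not garak's actual format):

```python
def render_for_api(prompt: VisualTextPrompt, image_ref: str = "") -> str:
    # Hypothetical helper: "<image>" stands in for whatever placeholder the
    # visual_jailbreak prompts actually use. Some APIs want it replaced with
    # their own image reference; others want it removed entirely.
    if image_ref:
        return prompt.text.replace("<image>", image_ref)
    return prompt.text.replace("<image>", "").strip()
```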