LLM Eval - Implement custom LLM evaluator component in core
Implement an `LLMEvaluator` component with the following parameters:

- `api` - API to use (OpenAI, Cohere, etc.)
- `api_key` - API secret
- `inputs` - tuple of expected inputs; these will act as the input sockets for the component
- `outputs` - keys to look up in the output JSON
- `instruction` - prompt to the LLM describing what it should do; the prompt should always ask for a JSON string in return
  - We can (try to) ensure this by modifying the prompt internally
- `examples` - one or more examples conforming to the input and output format
The component sets up the backend, validates and formats the inputs, calls the backend, validates the output, and returns it.
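A minimal sketch of how such a component could behave. The class shape, the injectable `backend` callable, and the exact prompt-mutation wording are illustrative assumptions for this proposal, not the final Haystack API:

```python
import json
from typing import Any, Callable, Dict, List, Optional, Tuple


class LLMEvaluator:
    """Sketch of the proposed evaluator component (hypothetical, not the final API)."""

    def __init__(
        self,
        api: str,
        api_key: str,
        inputs: List[Tuple[str, type]],
        outputs: List[str],
        instruction: str,
        examples: List[Dict[str, Any]],
        backend: Optional[Callable[[str], str]] = None,
    ):
        self.api = api
        self.api_key = api_key
        self.inputs = inputs
        self.outputs = outputs
        # Modify the prompt internally to (try to) ensure the LLM returns JSON.
        self.instruction = (
            instruction
            + "\nRespond only with a JSON object containing the keys: "
            + ", ".join(outputs)
        )
        self.examples = examples
        # Pluggable backend for illustration; a real component would build an
        # OpenAI/Cohere client here from `api` and `api_key`.
        self.backend = backend or (lambda prompt: "{}")

    def run(self, **kwargs: Any) -> Dict[str, Any]:
        # Validate that the caller supplied exactly the declared input sockets.
        expected = {name for name, _ in self.inputs}
        if set(kwargs) != expected:
            raise ValueError(f"expected inputs {expected}, got {set(kwargs)}")
        # Format the prompt from instruction, examples, and inputs.
        prompt = self.instruction
        prompt += "\nExamples: " + json.dumps(self.examples)
        prompt += "\nInputs: " + json.dumps(kwargs)
        raw = self.backend(prompt)
        # Validate the output: must be JSON and contain every declared key.
        result = json.loads(raw)
        missing = [k for k in self.outputs if k not in result]
        if missing:
            raise ValueError(f"missing output keys: {missing}")
        return {k: result[k] for k in self.outputs}
```

With a stubbed backend, `LLMEvaluator(..., outputs=["score"], backend=lambda p: '{"score": 1}').run(predicted_answers=["Paris"])` would return `{"score": 1}`, while a malformed or incomplete LLM response surfaces as an error rather than silently passing through.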