Chandra Irugalbandara
## Overview

I've implemented a modular Python package for evaluating the factuality of LLM responses, based on the research paper "Long-form factuality in large language models" by Wei et al....
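As a reference point for the evaluation logic, the paper's headline metric can be sketched roughly like this. This is a hedged reading of F1@K (the function name and signature are my own, not the package's API): precision is the supported fraction of checked facts, recall is the supported count capped at K.

```python
def f1_at_k(supported: int, not_supported: int, k: int) -> float:
    """Sketch of F1@K from Wei et al.: harmonic mean of factual precision
    and recall, where recall saturates once K supported facts are reached."""
    total = supported + not_supported
    if total == 0 or supported == 0:
        return 0.0
    precision = supported / total       # fraction of checked facts that are supported
    recall = min(supported / k, 1.0)    # supported facts relative to the cap K
    return 2 * precision * recall / (precision + recall)
```

For example, a response with 5 supported and 5 unsupported facts at K=10 scores 0.5.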
Use something other than Loguru for a nicer-looking execution display.
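One candidate could be Rich (an assumption on my part; the note doesn't name a replacement), which gives styled, timestamped console output:

```python
from rich.console import Console

# record=True keeps a transcript of everything printed, handy for tests/exports
console = Console(record=True)

console.rule("Evaluation run")          # horizontal rule with a title
console.log("Scoring response 1/10")    # timestamped log line
console.print("[bold green]done[/bold green]")  # styled output via markup
```

`console.status("...")` additionally provides a live spinner while a long step runs.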
```python
llm = OpenAI()

def wikipedia_search(query: str) -> str:
    # wikipedia calling logic
    ...

@llm.agent("Answer the Question", tools=[wikipedia_search])
def answer(question: str) -> Semantic[str, "answer to the question"]:
    ...
```

This will...
For example:

```python
class Apple:
    type: FruitType
    description: str
    color: str
```

In the generated type definitions, only the explanation of Apple is included; FruitType is not available.