Using RAG with the OpenAI multimodal agent
Is there sample code, or can you offer guidance, for passing additional context to the LLM with the new OpenAI multimodal agent, like in this voice pipeline agent example?
https://github.com/livekit/agents/blob/main/examples/voice-pipeline-agent/simple-rag/assistant.py
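For reference, the pattern from that example I'd like to replicate: a callback runs before each LLM call and mutates the chat context. A condensed sketch of it, assuming the v0.x `VoicePipelineAgent` API; `my_rag_lookup` is a hypothetical stand-in for a real vector store:

```python
from livekit.agents import JobContext, llm
from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins import deepgram, openai, silero


async def my_rag_lookup(query: str) -> str:
    """Hypothetical retrieval stub -- replace with your own vector store."""
    return "retrieved paragraph relevant to the query"


async def _enrich_with_rag(agent: VoicePipelineAgent, chat_ctx: llm.ChatContext):
    # The last message is the user's freshly transcribed turn; prepend the
    # retrieved context so the LLM sees it on this call.
    user_msg = chat_ctx.messages[-1]
    context = await my_rag_lookup(user_msg.content)
    user_msg.content = f"Context:\n{context}\n\nUser question: {user_msg.content}"


async def entrypoint(ctx: JobContext):
    await ctx.connect()
    participant = await ctx.wait_for_participant()

    agent = VoicePipelineAgent(
        chat_ctx=llm.ChatContext(),
        vad=silero.VAD.load(),
        stt=deepgram.STT(),
        llm=openai.LLM(),
        tts=openai.TTS(),
        before_llm_cb=_enrich_with_rag,  # runs before every LLM call
    )
    agent.start(ctx.room, participant)
```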
It's not simple to do RAG the same way today because of how the two approaches differ: the voice pipeline agent runs STT before a separate LLM call, so it can expose a `before_llm_cb` hook to inject context, while the multimodal agent streams audio directly to the Realtime API and offers no equivalent pre-LLM hook. The differences are covered here: https://docs.livekit.io/agents/voice-agent/#Multimodal-or-Voice-Pipeline
I expect this to change very quickly as they iterate on the model/API
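In the meantime, one partial workaround is to inject retrieved context as conversation items on the realtime session, the way the minimal multimodal example seeds an initial prompt. A rough sketch, assuming `RealtimeModel` exposes `sessions[0].conversation.item.create()` as in that example; `my_rag_lookup` is again a hypothetical retrieval stub, and note this injects context at a point you control rather than automatically before each user turn:

```python
from livekit.agents import JobContext, llm
from livekit.agents.multimodal import MultimodalAgent
from livekit.plugins import openai


async def my_rag_lookup(query: str) -> str:
    """Hypothetical retrieval stub -- replace with your own vector store."""
    return "retrieved paragraph relevant to the query"


async def entrypoint(ctx: JobContext):
    await ctx.connect()
    participant = await ctx.wait_for_participant()

    model = openai.realtime.RealtimeModel(
        instructions="Use any context items in the conversation to answer.",
        modalities=["audio", "text"],
    )
    agent = MultimodalAgent(model=model)
    agent.start(ctx.room, participant)

    # Inject retrieved context as a conversation item, then ask the model to
    # respond. This is one-shot: there is currently no clean hook to run a
    # retrieval step before every user turn in the realtime flow.
    session = model.sessions[0]
    context = await my_rag_lookup("topic the user will ask about")
    session.conversation.item.create(
        llm.ChatMessage(role="user", content=f"Context:\n{context}")
    )
    session.response.create()
```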