ApeRAG
ApeRAG: Production-ready GraphRAG with multi-modal indexing, AI agents, MCP support, and scalable K8s deployment
## Describe the bug When ApeRAG (via mcp_agent) calls a tool and sends the request to a self-hosted vLLM OpenAI-compatible server, the outgoing payload contains messages[*].content as a nested list...
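A common workaround for servers that reject list-valued `messages[*].content` is to flatten OpenAI-style "content parts" into a plain string before sending the request. The sketch below is illustrative only, under the assumption that the server accepts string content; the function names are hypothetical and are not ApeRAG's actual code.

```python
# Hypothetical sketch: flatten OpenAI-style content-part lists into a plain
# string, since some OpenAI-compatible servers reject list-valued content.
def flatten_content(content):
    """Return message content as a plain string."""
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        parts = []
        for part in content:
            # Text parts look like {"type": "text", "text": "..."}.
            if isinstance(part, dict) and part.get("type") == "text":
                parts.append(part.get("text", ""))
            elif isinstance(part, str):
                parts.append(part)
        return "".join(parts)
    return str(content)

def normalize_messages(messages):
    """Rewrite each message so its content is a string, leaving other keys intact."""
    return [{**m, "content": flatten_content(m.get("content"))} for m in messages]
```

Applied just before serialization, this keeps the rest of the payload untouched while guaranteeing string content. Note that it deliberately drops non-text parts (e.g. image blocks), which is only safe for text-only pipelines.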
## Describe the bug When uploading several documents at once (e.g., 3+), the Celery worker intermittently crashes or logs repeated asyncio errors. The error originates from LiteLLM's async logging worker:...
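One generic mitigation for asyncio errors inside Celery prefork workers is to give each task invocation its own fresh event loop rather than reusing a loop across tasks, so a library's background async tasks cannot outlive the loop they were scheduled on. This is a hedged, general-purpose sketch, not ApeRAG's or LiteLLM's actual fix; the `embed_document` task name is hypothetical.

```python
import asyncio

def run_async(coro):
    """Run a coroutine to completion on a dedicated event loop.

    asyncio.run() creates a new loop, runs the coroutine, and closes the
    loop afterwards, so nothing scheduled during the task can be left
    dangling on a dead loop when the next task starts.
    """
    return asyncio.run(coro)

# Inside a Celery task this pattern would look like (illustrative only):
# @app.task
# def embed_document(doc_id):
#     return run_async(do_embedding(doc_id))
```

Per-call `asyncio.run()` trades some loop-startup overhead for isolation, which is usually the right trade in a prefork worker that is not otherwise async-aware.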
## Describe the bug There isn't a way to force the language of the graph descriptions. I have a 50 page PDF that is in English and for some reason,...
# Background ApeRAG currently supports NebulaGraph as one of its graph storage backends. # Proposal Remove the NebulaGraph integration from ApeRAG, including: - Remove Nebula-specific connector and adapter code...
Hello, thanks for the awesome package. When I try to use local models from LM Studio, I get the message shown in the attached picture, which says the model has to...
[BUG]
I'm trying to use a vLLM model as the LLM, but it's hitting an error and can't answer questions. How do I resolve this?
## Describe the bug I added a local Ollama LLM provider in the settings, following the provider setup steps in the documentation. The problem is that when I try to chat with any...
## 🧠 Summary `celery-worker` pods are being **OOMKilled** repeatedly due to excessive memory usage when loading data from very large PostgreSQL tables (`lightrag_vdb_relation` and `lightrag_vdb_entity`). --- ## 🐛 Problem Description...
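A standard way to avoid OOMKills when reading very large tables is to stream rows in fixed-size batches (keyset pagination) instead of materializing the whole result set. The sketch below is illustrative, not ApeRAG's actual code: `fetch_page` is a hypothetical stand-in for a query like `SELECT id, ... FROM lightrag_vdb_entity WHERE id > %s ORDER BY id LIMIT %s`.

```python
def iter_rows(fetch_page, batch_size=1000):
    """Yield rows one at a time, fetching at most batch_size rows per round trip.

    fetch_page(last_id, limit) must return rows with id > last_id in
    ascending id order (last_id=None means start from the beginning),
    so memory use is bounded by one batch regardless of table size.
    """
    last_id = None
    while True:
        page = fetch_page(last_id, batch_size)
        if not page:
            return
        yield from page
        last_id = page[-1]["id"]
```

Keyset pagination keeps each query cheap even deep into the table, unlike OFFSET-based paging; a PostgreSQL server-side cursor would achieve a similar memory bound.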