
BISHENG is an open LLM devops platform for next generation Enterprise AI applications. Powerful and comprehensive features include: GenAI workflow, RAG, Agent, Unified model management, Evaluation, SF...

Results: 185 bisheng issues

Uploading multiple documents (>10) at once to the knowledge pool sometimes fails due to the MySQL QueuePool limit (`pool_size` defaults to 5). Increasing `pool_size` addresses this issue.
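A minimal sketch of raising the pool size, assuming the backend builds its SQLAlchemy engine in one central place (the database URL and the exact numbers here are illustrative, not bisheng's actual settings):

```python
from sqlalchemy import create_engine

# Hypothetical database URL; in bisheng this comes from the backend settings.
DATABASE_URL = "mysql+pymysql://user:password@mysql:3306/bisheng"

# A larger pool_size (plus some max_overflow) lets more concurrent uploads
# each hold a connection without hitting the QueuePool limit.
engine = create_engine(
    DATABASE_URL,
    pool_size=20,      # default is 5; too small for >10 parallel uploads
    max_overflow=10,   # extra connections allowed beyond pool_size
    pool_timeout=30,   # seconds to wait for a free connection before raising
)
```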

I tried to use an Alibaba Cloud model API key, all from the skill settings page, but I don't know how to configure things to connect to the model. Here is Alibaba Cloud's hint: "The HTTP interface uses a POST request to https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation to obtain the LLM inference result."
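For reference, a minimal sketch of calling that DashScope endpoint directly with `requests`; the header and payload layout follow Alibaba Cloud's documented text-generation format, and the model name `qwen-turbo` is only an example:

```python
import requests

API_KEY = "sk-..."  # your DashScope API key

resp = requests.post(
    "https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "qwen-turbo",                 # example model name
        "input": {"prompt": "Introduce yourself briefly."},
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # the generated text is typically under output.text
```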

**Backend log:** `Error: 'run' not supported when there is not exactly one output key. Got ['result', 'source_documents']. Traceback (most recent call last): chat_id='f526449bec42c8349f3a81cbefbb4d32' user_id='1' liked=0 solved=0 sender=None receiver=None intermediate_steps="分析出错, Error: 'run' not supported...`
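This error typically comes from LangChain's `Chain.run`, which refuses to pick an output when the chain returns more than one key, e.g. a RetrievalQA-style chain with `return_source_documents=True`. A minimal sketch of the usual workaround, calling the chain with a dict instead of `run` (the chain construction is illustrative; `llm`, `vector_store`, and `question` are placeholders):

```python
from langchain.chains import RetrievalQA

# Illustrative chain; in bisheng the chain is built by the skill/workflow layer.
qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vector_store.as_retriever(),
    return_source_documents=True,   # adds a second output key: 'source_documents'
)

# qa.run(question)                  # fails: more than one output key
result = qa({"query": question})    # returns both keys as a dict
answer = result["result"]
sources = result["source_documents"]
```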

In bisheng\src\backend\bisheng\api\v1\knowledge.py, `vectore_client.add_texts` stores the vectors without splitting `texts` into batches, so the POST request to the milvus-minio service on port 9001 returns no vector results. Suggested fix: slice `texts` and call `vectore_client.add_texts` in batches, as sketched below.
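A minimal sketch of the suggested batching, assuming `vectore_client` exposes the standard LangChain `add_texts(texts, metadatas)` signature (the batch size is arbitrary):

```python
def add_texts_in_batches(vectore_client, texts, metadatas=None, batch_size=100):
    """Insert texts into the vector store in fixed-size batches."""
    for start in range(0, len(texts), batch_size):
        end = start + batch_size
        batch_meta = metadatas[start:end] if metadatas else None
        vectore_client.add_texts(texts[start:end], metadatas=batch_meta)
```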

What is the current status of DCU support? Do we need to adapt Triton ourselves and then deploy BISHENG directly? Is there any reference method, or any internal progress in bisheng we could learn about? Thanks!

The error looks like a character-set encoding problem: `分析出错, Error: 'latin-1' codec can't encode characters in position 0-31: ordinal not in range(256)`
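This error usually means non-ASCII text (e.g. Chinese) is being encoded with the latin-1 codec somewhere, often where the text crosses a byte boundary such as an HTTP body or header. A small reproduction and the usual fix, encoding explicitly as UTF-8 (the variable names are illustrative):

```python
text = "这是一个包含中文字符的字符串"

# Reproduces the reported error: latin-1 cannot represent Chinese characters.
try:
    text.encode("latin-1")
except UnicodeEncodeError as exc:
    print(exc)  # 'latin-1' codec can't encode characters in position 0-...: ordinal not in range(256)

# Encode as UTF-8 wherever the text is serialized to bytes
# (HTTP bodies, files, sockets); headers should carry only ASCII.
payload = text.encode("utf-8")
```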

Could BISHENG, like fastgpt, log the prompts and the inputs/outputs of LLM calls to make debugging easier?
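In the meantime, a minimal sketch of how such logging can be done on the LangChain side with a custom callback handler (the handler here is illustrative, not part of bisheng):

```python
from langchain.callbacks.base import BaseCallbackHandler

class PromptLogger(BaseCallbackHandler):
    """Log every prompt sent to the LLM and every completion it returns."""

    def on_llm_start(self, serialized, prompts, **kwargs):
        for prompt in prompts:
            print(f"[LLM prompt]\n{prompt}")

    def on_llm_end(self, response, **kwargs):
        for generations in response.generations:
            for gen in generations:
                print(f"[LLM output]\n{gen.text}")

# Pass the handler when invoking a chain or LLM, e.g.:
# chain.run(question, callbacks=[PromptLogger()])
```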

RT 0.0.5 model config:

```json
{
  "parameters": {
    "type": "dataelem.pymodel.vllm_model",
    "decoupled": "1",
    "pymodel_type": "llm.vLLMQwen7bChat",
    "pymodel_params": "{\"temperature\": 0.0, \"stop\": [\"\", \"\",\"\"]}",
    "gpu_memory": "20",
    "instance_groups": "device=gpu;gpus=0",
    "reload": "1",
    "verbose": "0"
  }
}...
```

I'd like to know where the source code that uses langchain to parse web page endpoints is encapsulated; I want to implement something similar by following it.
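For reference, web-page loading in LangChain itself is typically done with `WebBaseLoader`; a minimal sketch follows (whether bisheng wraps this exact loader is an assumption):

```python
from langchain.document_loaders import WebBaseLoader

# Fetches the page and parses the HTML into LangChain Document objects.
loader = WebBaseLoader("https://example.com/some-page")
docs = loader.load()

for doc in docs:
    print(doc.metadata.get("source"), len(doc.page_content))
```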