supercocoa
We use Triton Inference Server for online inference. Can the DeepRec processor be used with Triton Inference Server?
During training the model takes features such as userId/itemId as input, and the trained model is very large. We want to extract the embedding layer from the trained model and store it in Redis, then at inference time look up the embeddings in Redis and feed them directly to the serving model. How can EasyRec export a model like this? Reference: [store the embedding layer separately in Redis; if Redis can't hold it, fall back to HBase](https://www.zhihu.com/question/354086469/answer/894235805)
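A minimal sketch of the serving-side flow being asked about, assuming the embeddings have already been exported under keys like `user_emb:<id>` / `item_emb:<id>` (both the key scheme and the dict-as-Redis stand-in are illustrative assumptions, not EasyRec's actual export format; in production the lookups would be `redis_client.get(...)` calls):

```python
import numpy as np

EMB_DIM = 8  # assumed embedding width, for illustration only


def make_store():
    """Build a dict standing in for Redis, pre-filled with fake embeddings."""
    rng = np.random.default_rng(0)
    store = {}
    for i in range(3):
        store[f"user_emb:{i}"] = rng.standard_normal(EMB_DIM).astype(np.float32)
    for i in range(5):
        store[f"item_emb:{i}"] = rng.standard_normal(EMB_DIM).astype(np.float32)
    return store


def lookup_input(store, user_id, item_id):
    """Fetch the user and item embeddings and concatenate them into the
    dense input vector that the embedding-free serving model would consume."""
    user_vec = store[f"user_emb:{user_id}"]
    item_vec = store[f"item_emb:{item_id}"]
    return np.concatenate([user_vec, item_vec])  # shape (2 * EMB_DIM,)


store = make_store()
x = lookup_input(store, user_id=1, item_id=3)
print(x.shape)  # (16,)
```

The point of the split is that the serving model then only needs the dense layers, so the large embedding tables never have to be loaded into the inference process.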
From the docs, hape's multi-node mode works by SSHing into each node and starting services via Docker. Many environments don't allow that kind of access. Could you add support for launching the services directly through k8s?
Besides HDFS, is OSS supported as storage? I can only find documentation and code for HDFS.
1. For full tables, the docs say to create the table once and then write new data via Swift. If the incremental data grows large and we want to rebuild the full index with bs, what command switches to the new full index? Or is the only option to delete the old table and create a new one? Or is there an alias mechanism like Elasticsearch's, allowing a seamless switch without affecting online queries? 2. For direct-write tables populated with SQL INSERT, are UPDATE / DELETE statements also supported? 3. Is there a performance difference between full tables and direct-write tables for online queries?
### Initial Checks - [X] I have searched Google & GitHub for similar requests and couldn't find anything - [X] I have read and followed [the docs & demos](https://github.com/modelscope/modelscope-agent/tree/master/demo) and...
## Issue Description In a k8s pod, SystemStatusListener reports the physical host's load, not the pod's load ### Describe what you expected to happen It should report the load of the pod or container ### Tell us your environment k8s
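For context on why this happens: `/proc/loadavg` is not namespaced, so inside a container it still shows the host's load average. A container-aware listener would read the pod's cgroup CPU accounting instead. A hedged sketch of parsing cgroup v2's `cpu.stat` (the path and field names below follow the cgroup v2 layout; cgroup v1 exposes the equivalent under `cpuacct` instead):

```python
from pathlib import Path

# Under cgroup v2, a container's own CPU accounting lives at
# /sys/fs/cgroup/cpu.stat; the "usage_usec" line is the total CPU
# time consumed by this cgroup, in microseconds.
CPU_STAT = Path("/sys/fs/cgroup/cpu.stat")


def parse_usage_usec(text: str) -> int:
    """Extract total CPU time (microseconds) from cpu.stat contents."""
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if key == "usage_usec":
            return int(value)
    raise ValueError("usage_usec not found in cpu.stat")


# Sample cpu.stat contents, for illustration (a real listener would
# read CPU_STAT twice and derive utilization from the delta):
sample = "usage_usec 123456\nuser_usec 100000\nsystem_usec 23456\n"
print(parse_usage_usec(sample))  # 123456
```

Sampling `usage_usec` at two points in time and dividing the delta by the elapsed wall time (and the cgroup's CPU quota) gives a pod-level utilization figure, which is what one would expect the listener to report here.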
As titled. I started a three-node HA cluster, connected to one of the nodes with the Java Neo4j Bolt client, and performed write operations, but the changes were not synchronized to the other nodes. Does the Neo4j Bolt client not support HA mode? TuGraph version: 4.3.0
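One thing worth checking: with the official Neo4j drivers, the connection URI scheme decides whether the client performs cluster routing at all. `bolt://host` pins every request to that single server, while `neo4j://host` enables routing across cluster members. Whether TuGraph 4.3.0's Bolt endpoint implements the routing protocol is an assumption to verify against its docs; the helper below just illustrates the URI distinction:

```python
def driver_uri(host: str, port: int = 7687, routed: bool = True) -> str:
    """Build a Bolt connection URI.

    "neo4j://" asks the driver to fetch a routing table and spread
    requests across the cluster; "bolt://" talks to one server only.
    """
    scheme = "neo4j" if routed else "bolt"
    return f"{scheme}://{host}:{port}"


print(driver_uri("node1", routed=False))  # bolt://node1:7687
print(driver_uri("node1"))                # neo4j://node1:7687
```

If the client was created with a `bolt://` URI, writes landing on one node without routing awareness would be consistent with the behavior described, independent of whether the server-side replication is working.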