After building FastDeploy from source for a Huawei Ascend Atlas 300 inference card, I tried to deploy a UIE model and ran into this problem: [ERROR] fastdeploy/runtime/runtime_option.cc(181)::UsePaddleBackend The FastDeploy didn't compile with Paddle Inference.
Environment
- [FastDeploy version]: built from source for NPU
- [Build command]: If you built FastDeploy yourself, please describe how you compiled it (parameters/commands)
- [System/platform]: Linux ARM (Ubuntu 18.04)
- [Hardware]: Ascend Atlas 300
- [Language]: Python 3.7
Problem logs and steps to reproduce
- [Model fails to run]
root@ecs-1e0c:/Work# cd /Work/FastDeploy/examples/text/uie/python
root@ecs-1e0c:/Work/FastDeploy/examples/text/uie/python# python infer.py --model_dir ./uie-base --device cpu
[ERROR] fastdeploy/runtime/runtime_option.cc(181)::UsePaddleBackend The FastDeploy didn't compile with Paddle Inference.
Aborted (core dumped)
root@ecs-1e0c:/Work/FastDeploy/examples/text/uie/python#
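For reference, when the wheel was not built with Paddle Inference, the CPU path can be pointed at another backend instead of the default UsePaddleBackend. Below is a minimal sketch, assuming the standard FastDeploy Python API (fd.RuntimeOption with use_cpu/use_ort_backend, and fd.text.UIEModel) and that the wheel was built with ONNX Runtime support; the file names follow the uie-base directory printed in the log, and the schema value is only an example:

```python
import fastdeploy as fd

# A wheel built only for Ascend/Paddle Lite has no Paddle Inference backend,
# so request ONNX Runtime explicitly instead of the default Paddle backend.
option = fd.RuntimeOption()
option.use_cpu()
option.use_ort_backend()  # avoids the missing Paddle Inference backend

# Paths match the uie-base export printed further down in the log.
uie = fd.text.UIEModel(
    "./uie-base/inference.pdmodel",
    "./uie-base/inference.pdiparams",
    "./uie-base/vocab.txt",
    max_length=128,
    schema=["时间"],  # example extraction target; replace with your own schema
    runtime_option=option,
)
print(uie.predict(["2023年3月1日，小明在北京参加会议。"]))
```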
After switching to the corresponding inference engine and interface, the following error occurs:
root@ecs-1e0c:/Work/FastDeploy/examples/text/uie/python# python3 infer.py --model_dir ./uie-base/ --device npu --backend npu
Namespace(backend='npu', batch_size=1, cpu_num_threads=8, device='npu', device_id=0, max_length=128, model_dir='./uie-base/', use_fp16=False)
Set device ok!!!
RuntimeOption(
backend : Backend.LITE
cpu_thread_num : -1
device : Device.???
device_id : 0
external_stream : None
model_file :
model_format : ModelFormat.PADDLE
model_from_memory : False
openvino_option : <fastdeploy.libs.fastdeploy_main.OpenVINOBackendOption object at 0xffff69c303f0>
ort_option : <fastdeploy.libs.fastdeploy_main.OrtBackendOption object at 0xffff69c303f0>
paddle_infer_option : <fastdeploy.libs.fastdeploy_main.PaddleBackendOption object at 0xffff69c303f0>
paddle_lite_option : <fastdeploy.libs.fastdeploy_main.LiteBackendOption object at 0xffff69c303f0>
params_file :
poros_option : <fastdeploy.libs.fastdeploy_main.PorosBackendOption object at 0xffff69c303f0>
trt_option : <fastdeploy.libs.fastdeploy_main.TrtBackendOption object at 0xffff69c303f0>
)
./uie-base/inference.pdmodel ./uie-base/inference.pdiparams ./uie-base/vocab.txt
[ERROR] fastdeploy/fastdeploy_model.cc(124)::InitRuntimeWithSpecifiedBackend The valid ascend backends of model UIEModel are [], Backend::PDLITE is not supported.
Traceback (most recent call last):
File "infer.py", line 140, in
I ran into the same problem. Has it been solved?