It looks like PP-OCRv4 can be used now.
See the thread https://github.com/PaddlePaddle/FastDeploy/issues/2571 — please update soon.
Right, I saw that a while ago too, but I've been busy lately; I'll add it when I have time.
PRs are also welcome.
I read the models_list doc, which says the Paddle Lite models can actually be exported from the PaddleOCR models as .nb models with the opt tool, so ch_PP-OCRv4 can be exported to .nb for Paddle Lite. I generated the .nb model following the model_optimize_tool doc and deployed Paddle Lite on a phone, but strangely my v4 is twice as slow as v2.
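For reference, one way to do that conversion is with the Opt API from the paddlelite pip package (equivalent to the command-line opt tool in the model_optimize_tool doc). This is only a sketch; the input paths and output name below are illustrative, and the paddlelite version should match the Paddle-Lite runtime linked on the device:

```python
# pip install paddlelite  (version should match the on-device Paddle-Lite runtime)
from paddlelite.lite import Opt

opt = Opt()
# Combined inference model exported by PaddleOCR (paths are illustrative)
opt.set_model_file("ch_PP-OCRv4_rec_infer/inference.pdmodel")
opt.set_param_file("ch_PP-OCRv4_rec_infer/inference.pdiparams")
opt.set_valid_places("arm")          # target ARM CPU (Android)
opt.set_model_type("naive_buffer")   # emit a .nb file
opt.set_optimize_out("ch_PP-OCRv4_rec_opt")  # writes ch_PP-OCRv4_rec_opt.nb
opt.run()
```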
Has Paddle Lite been upgraded?
Can I just download the v4 models and put them in assets?
@dillonliu123 The v4 models can't be used yet.
How did you get the PP-OCRv4 rec .nb model to run? What modifications did you make? Can you share them? As I recall, PP-OCRv4 was trained with dstHeight = 48, right?
I changed REC_IMAGE_SHAPE = {3, 32, 320} to REC_IMAGE_SHAPE = {3, 48, 320} in ocr_crnn_process.cpp but still get an error.
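For context, the usual PP-OCR recognition preprocessing (which ocr_crnn_process.cpp presumably mirrors) resizes each text crop to a fixed height, normalizes it, and pads the width, so the target height has to match what the model was trained with: 48 for v4 instead of 32 for v2. A rough Python sketch of that step, with illustrative names, not the project's actual code:

```python
import cv2
import numpy as np

def resize_norm_img(img, rec_image_shape=(3, 48, 320)):
    """Resize a text crop to the rec model's input shape (3, 48, 320 for PP-OCRv4)."""
    img_c, img_h, img_w = rec_image_shape
    h, w = img.shape[:2]
    # Keep the aspect ratio, cap the width at img_w.
    resized_w = min(img_w, max(1, int(np.ceil(img_h * w / float(h)))))
    resized = cv2.resize(img, (resized_w, img_h)).astype(np.float32)
    # Normalize to [-1, 1] and switch to CHW layout.
    resized = resized.transpose((2, 0, 1)) / 255.0
    resized = (resized - 0.5) / 0.5
    # Pad the width with zeros up to img_w.
    padded = np.zeros((img_c, img_h, img_w), dtype=np.float32)
    padded[:, :, :resized_w] = resized
    return padded
```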
Paddle-Lite now supports v4; you can swap in the latest library and rebuild. The results are decent.
@dengwhao Yes, I've wanted to update it since I saw that a while ago, but all I have at hand right now is Windows and I simply can't get it to build... and there are no official prebuilt libraries 😭
Once you've tested it and everything looks fine, feel free to open a PR.
@equationl I ran it from the shell command line on the device. The results look so-so for now, though still better than v2. With the paddleocr library in Python, the English line below is recognized correctly (see the sketch after the timing results below); I haven't verified it with Paddle-Lite. The pip-installed library is still version 2.13, so I can't rule out that the model conversion introduced the errors.
Results on the dev board (armeabi-v7a):
The detection visualized image saved in ./test_img_result.jpg
0 太阳是绕着我们转话 0.972321
1 我们仍然认为我们是字宙的中心 0.965145
2 wedstiltinwerthenteftiveethatthisorbiing 0.737318
Took 5.55687 seconds
Results on my own phone (arm64-v8a):
The detection visualized image saved in ./test_img_result.jpg
0 太阳是绕着我们转话 0.972321
1 我们仍然认为我们是字宙的中心 0.965145
2 wedstiltinwerthenteftiveethatthisorbiing 0.737318
Took 0.74241 seconds
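As a cross-check of the claim above that the paddleocr Python package recognizes the English line correctly, a minimal sketch (the image filename is illustrative):

```python
from paddleocr import PaddleOCR

# PP-OCR pipeline from the paddleocr pip package, used as a reference result
ocr = PaddleOCR(use_angle_cls=True, lang="ch")
result = ocr.ocr("test_img.jpg", cls=True)
for box, (text, score) in result[0]:
    print(text, score)
```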
The errors above were caused by my not changing 32 -> 48 in the recognition step. But there's one thing I don't understand: is there still some difference between the JNI code and the official shell inference? Why do the app results differ from the shell results?
Could you share it so we can try out the recognition quality?
@dengwhao The app does some preprocessing; some of it may actually hurt rather than help, haha.
@equationl @ljg208 I've opened a PR; you can merge it, build, and run it to see how it performs: #57
Merged and released in v1.2.9.